
Wanted: Technical problem solvers

Posted on April 12, 2018

Most developers are used to working in a team environment, where a business analyst writes requirements, a developer writes the code, and a tester tests the code. This is a widely accepted practice, yet something about it is inherently inefficient.

Don’t get me wrong, I’m not saying that doing things this way is bad. Yet anytime you add someone else to a process, it becomes inherently less efficient because those people have to communicate and get on the same page. Generally the inefficiencies are outweighed by the benefits of having people using their strengths to help move the process along… but not always. Let me explain.

On Agile projects, requirements are not the deliverable; they’re just a means to an end. Requirements often serve other purposes too, like communicating with business partners about what is going to be built in order to make sure you’re building the right thing. But if you think about it, requirements are in a way inherently inefficient.

Software development is a series of translations. Our goal is to take business ideas and translate them into working software. This is obviously easier said than done. On a typical project, there are many levels of translation: a business person expresses ideas to a business analyst, who turns those into requirements with acceptance criteria, which developers turn into code and automated tests, and QA people turn into test plans, which leads to some UAT process with users, which eventually gets deployed to production. This process has been proven over the years to work well, but there are many translations that have to happen in order for software to get created.

My argument is that there are some cases where this process is more than we need. What if for certain projects or functionality, a person or small group of people could just go solve a problem? Forget about writing requirements using the typical methods and just streamline the process and get something delivered quickly. Think about the average ticket that a BA has to write. It takes a long time to type that up, and when they do, there still isn’t any code written.

Doing this takes a lot of skill. If you’re a developer doing this, you need to be able to communicate with the business, understand what they really want, speak to them in their terms, develop a working solution, sufficiently test your code, and work with business people to make sure everything was built correctly. But if you can do this and write just enough requirements and develop and execute a comprehensive test plan, you potentially could deliver working software much faster.

I would argue that this should be the next step in the career path for senior developers. Are you able to deliver working software without the assistance of business analysts and the validation of QA testers? If you’re able to do this, you’re able to keep progress moving while freeing up business analysts and QA testers to have time to focus on areas where they’re needed.

One of the projects I’ve done that I’m most proud of was a 6 month BI ETL project where I was the only one on the team. I had to gather and write up all the requirements (I didn’t skip this part because people needed to know how it worked), write the code, test it, and make sure it continued to work in production. This code is starting its 5th year in production now, and it’s allowed people on several systems to develop against a data model that is going to be around for a lot longer than my code.

How long would that 6 month project take if we had a business analyst and a tester on it? Would it have taken much less time? Probably, but certainly not a third of the time, because so much communication would have to take place in order for everyone to get on the same page.

Now that I’m leading a team, I’m always looking for developers who can take ownership of a technical problem and deliver a solution for it. Are you able to do that? If not, what skills do you need to acquire to be able to? (It might not be coding skills.)

This is not a case of a team leader with a development background failing to appreciate business analysts or trying to cut them out of the process. I’m saying that as developers, sometimes we use business analysts and QA testers as a crutch. If you’re able to deliver technical solutions without this support, you can take your team and career to the next level.





Team-based organizations vs. role-based organizations

Posted on April 7, 2018

I had a conversation with a friend of mine this week about how to structure teams in an IT department, the main question being: do you align reporting structure by roles, or do you have everyone on a team report to the same person regardless of role? It was a good conversation, but I kept thinking about it for a couple of days afterward, so here I am writing about it.

Just to illustrate, here are pictures showing the two approaches:

Role-based org chart:
[image: role-based org chart]

Team-based org chart:
[image: team-based org chart]

(We were discussing IT departments with 80-100 people, with teams generally around 5-10 people. Things are a little more complicated in larger IT departments, but some of the arguments I lay out here likely still apply.)

Here is the question – if you have a team-based structure where inevitably several team members will be reporting to a manager who doesn’t come from a background in their discipline (e.g. BAs reporting to a manager with a development background), how do you accurately evaluate, train, and mentor those team members?

I would say that those can be valid concerns, but I would also say that there are more important considerations.

One goal

The problem with the role-based structure is that team members are forced to choose between two masters – their team leader and their boss. Many times everyone is aligned and this isn’t an issue. But what happens when people have to choose? For example:

  • QA is behind, so the team lead asks if a developer can pitch in and help test. The QA lead on the project is nervous because, as a QA person, they feel responsible for quality, and they’re worried about getting in trouble with their boss if bugs get out the door because the developers helping with testing missed something.
  • Developers want to start writing more unit tests and the team collectively agrees, but the dev manager thinks unit testing is a waste of time.
  • Developers offer to automate some of the testing for QA so that QA can skip some really difficult, time-consuming manual tests. QA people are skeptical because they aren’t sure that their boss will be OK with it.
  • The team feels like they can solve a certain problem faster by just coming together to solve the problem with less (or no) written requirements from BAs. BAs are worried that their boss will see the requirements and think they aren’t doing their job correctly.
  • Team members can’t agree on the best way to do something, and while a team leader would like to break the tie, team members won’t change because they choose to align with their boss instead.

Regardless of discipline, everyone on the team should have the same goal – delivering great software that solves the right problems. In a team-based structure, it’s much easier to put the goal over your role.

Communities of practice

The concern about people reporting to managers in a different discipline is a valid one. That being said, I’ve reported to several non-technical project manager types and it never bothered me. If you ask a team leader how well people on their team are producing, they’ll know who is good regardless of role. Also, if you manage people but you’re not a member of their team, how well do you know what they’re doing? (I have some people like this that report to me, and the answer is I honestly don’t really know that much other than things I hear second-hand.)

Also, when you think about the people who taught you the most in your career, how many of them were your direct manager? I’ve learned a lot from some of my managers, but probably more from other team members and other people I worked with at the company. Employees are often better at their discipline than their manager, who doesn’t do the work all day and has different kinds of job responsibilities as a manager.

One thing you can do is set up “communities of practice”, which are groups of people in the same discipline that get together to share ideas. This is a good idea anyway — it gives people an opportunity to learn how other teams are doing things so that they can learn from each other and collectively get better. I’ve seen this done for everything from QA, Scrum masters, and architecture to automated testing and various JavaScript frameworks.

Encourage learning

In my opinion, we shouldn’t just encourage learning, we should require it. Technology changes so fast, and it’s a challenge to keep up. I’m surprised how many companies don’t have any training programs for their employees.

If you want people to value learning, help people find learning opportunities, pay for them to attend conferences (and don’t make them use PTO), and even require people to spend a certain amount of time in training/learning as a part of their job (during work hours). Not only does this keep people up to date, good employees will appreciate the investment in their career.

Continuous learning is a good thing, and it has nothing to do with who someone reports to. So even if someone is reporting to a person who doesn’t share their discipline, they’re still honing their craft.

Standardization is the enemy of innovation

There is an idea out there that once we find out the “right” way to do something (whether it’s a certain tool, home-grown framework, template, etc.), we should roll it out to everyone so that everything will be done in a standard way that follows “best practices”. This reduces the chances of failure, and allows employees to be moved around to different teams and not face a large learning curve because things will be done in the same way. Most IT departments will “standardize” at some level (e.g. “we’re a .NET shop”), and things like that are good for consistency’s sake, but that’s not the level of standardization I’m referring to here.

The pace of technology is changing faster than ever, and the only way companies can keep up and survive is innovation. Striving for standard methods is the opposite of innovation — it tells people that they’re not allowed to find new, innovative ways to solve problems, because it’s more important to follow the standards, even if the standards are outdated or don’t work in their situation.

Oftentimes the standardization comes from role-based managers who roll out their standards for how their people do their jobs. That’s not to say these managers don’t provide good information; often they do. But every team is different, and every project is different. The way you test a high-traffic public website is way different from how you would test a mobile app on multiple platforms, an internal app doing back-end transaction processing, or a reporting solution. Good teams will come up with innovative ways to solve problems, and will figure out what makes the most sense for their situation.

Putting teams first

One of my favorite principles from agile is the idea that you try to keep teams together, and then bring projects to teams (rather than assembling teams for projects). If your organizational structure mirrors your team structure, this gets a lot easier. If you shuffle your teams around, you inevitably have that one guy that is too important to take off support for a system, so he has to split time. You end up with systems that were developed by teams that no longer exist, so the support structure is missing. You lose all of the cohesion and camaraderie that a team builds up over time.

I’ve been on teams where the core of the team had been together for a long time (years), and those teams ran so smoothly. We all understood our process, we knew how each other worked, and we got really good at working together to solve problems with just the right amount of process. The team didn’t need a lot of oversight because we all knew the agreed upon way to work. But the best part was that these people became really close friends of mine, and I’ll still keep up with them even though I’ve moved on.

More of what works

There’s no one right way to do anything, and there will be situations where my arguments don’t apply. But no matter what your environment is, strive to do more of what works and less of what doesn’t, and keep innovating.





Impostor syndrome is bringing us down

Posted on January 2, 2018

I have a young daughter, which means that my life consists of the Frozen soundtrack being played relatively consistently in my house. The most popular song, “Let It Go”, is about a girl’s struggle to break free from having to impress people and put up a front, and get to a place of empowerment where she can take on any challenge that comes her way and reach her limitless potential. They make these movies to encourage young girls to feel empowered and to feel that they can take on the world. Maybe the rest of us should start listening as well.

The fact is that so many people are struggling with impostor syndrome – a feeling of incompetence, that you’re not good enough to succeed at something, or a fear of being exposed as a fraud. This is stopping people from taking on new challenges and feeling empowered to succeed.

Don’t let them in,
don’t let them see
Be the good girl you always have to be
Conceal, don’t feel,
don’t let them know

We’ve all been in situations at times where we’ve been in over our head. It’s an uncomfortable feeling for sure. We’ve all had our shares of successes and failures. And yet we all woke up the next day with a new chance to make something of our day.

Think of the people that you know who are out there making a difference, succeeding in a new role at work, or having an impact in some way. I’m going to let you in on a little secret – they’re all in over their head.

The fears that once controlled me
Can’t get to me at all

I always want to be in a place where I’m a little bit uncomfortable. It’s not always fun and maybe I lose a bit of sleep at night, but there’s no better way to grow. There’s nothing more satisfying than taking on a challenge that you’re not totally prepared to handle and then finding a way to make it work.

But maybe more important is not being a slave to fear. There is enough stress in life; the last thing I need to add is fear of failure. I run across so many people who are very talented but are too afraid to step up and take on a new challenge. I run into people who are afraid to have kids because they are afraid that they won’t know how to be a good parent.

Even writing that last paragraph makes me sad. We are sitting on a wealth of untapped potential. So many people could do something great if they just were willing to step out into the unknown. The world desperately needs people with enough confidence to step up and make a difference.

You may feel like this describes you. You might think that the worst thing that can happen is failing. I think the worst thing that can happen is that you fail to even make an attempt. Sure, by never trying you guarantee that you won’t fail, but you will be left to deal with your own inability to step out into the unknown.

It’s time to see what I can do
To test the limits and break through
No right, no wrong, no rules for me,
I’m free!

Imagine what your life would be like if you weren’t afraid of failing. Imagine if you didn’t care what people thought about you. Imagine if you looked at a mountain in front of you and saw an exciting challenge instead of an overwhelming obstacle.

Here I stand
In the light of day
Let the storm rage on

Life is hard, and downright scary at times, but we all are here for a reason and we can all make a difference, if we’re just willing to give it a try.





A WFH Retrospective

Posted on December 1, 2017

It’s no secret that communication is essential for successful teams. Most of us have left the world of high cube walls for open floor plans in an attempt to increase communication. So certainly the worst thing you could do is have nobody working anywhere near each other, right?

But wait…

We’re very lucky to be in the 0.0001% of humans in history to have access to this thing called the internet. Thanks to the internet we have video chat, smartphones, Slack, and any number of ways to communicate without physically sitting next to someone.

My team has always been co-located in the same area of the same office (thankfully with no high cube walls), but we work with business people who are spread all over the country. I’m guessing that many of you work in offices where the business people you interact with are in your same office, so you spend a lot of face to face time with them. We don’t have that luxury for the most part, yet we seemed to do pretty well over the phone.

People on my team were occasionally working from home for the various usual reasons (people coming to fix something, sick kids, car in the shop, etc.). I heard from several developers that they liked the amount of productivity that they felt they were getting when they worked from home, so we decided to try an experiment — what if everyone worked from home?

So we gave it a try. I had one stipulation – we all had to communicate just as well as we would in the office. There couldn’t be any “I’ll just wait until tomorrow when we’re in the office to talk about this.” We had to be willing to get on the phone or on video chat to work through issues. And you know what, it worked.

The next level

Since it worked so well when we tried it, we’ve implemented work from home Fridays. Other than missing out on the food trucks there really have been no downsides to this. We’re about a month in and we’ve learned a lot along the way.

Video chat really helps

If you’re having a remote meeting, video chat is really good. There’s something about seeing someone’s face that really makes a difference. There are people (not on my team) that I’ve been on phone conferences with many times and I’ve never seen their face. That just feels weird. I’m building software for these people and they don’t even know what I look like.

Video also keeps people focused… you can’t sit in a meeting and just work on other things the entire time if people can see what you’re doing.

There are many tools out there for doing video chat, but you can always use Google Hangouts for free. If you need something more, you can invest in a better video communication tool or outfit your conference rooms with video conferencing equipment.

Online work tracking

For years I used a physical board to track tickets in addition to work item tracking tools, but remote working pretty much requires the online option. We had moved away from the physical board anyway before we started working from home. The online tools have gotten much better over the years (we use Jira, but there are many choices). We haven’t seen any downsides from going online only, and we no longer spend hours putting tickets up on a physical board only to have people forget to update it, making it relatively worthless until someone spends hours updating it again.

Overcommunicating

Like I said earlier, there can’t be any “we’ll just talk about this in the office tomorrow.” If you need to discuss something, discuss it over chat or on the phone.

This is where I was most pleased. It got to a point that Slack was blowing up so much that I felt like I almost had fewer distractions when I was in the office. People were complaining because I took a break for lunch and didn’t set my status on Slack to away.

Days when some people work from home are better now

Since we’ve gotten good at communicating, that has spread to when only some of the team is working from home. We used to have problems where the people in the office would have conversations and leave out the people working remotely. Now we’re remembering to include everyone, either by having the conversation on Slack or calling the remote people in. We’re just getting better at communicating overall.

So many perks

There are many obvious benefits to working from home. I don’t have to spend 1.25 hours in the car every day. I can eat lunch with my family. People can leave town Thursday night and work from another city on Friday. I can start at 7am and be done at 3pm if I want. It’s not a big inconvenience if I need to have someone come work on my house. I can quit early and just work part of the evening if I want. I’m way more relaxed and energized at the end of the day because I don’t have to end it with a drive through rush hour traffic.

It’s a pretty competitive market for developers out there, so if I can give someone a perk that is hard to find and it doesn’t cost the company anything, that’s really a no brainer. Honestly if I had to give it up at this point, I would really miss it. In fact, it’s Friday night and instead of being exhausted from a long day, I’m writing this post. Maybe that’s a coincidence, but it was a rough and tiring week and today was a really good day.

You can’t be a micro-manager

If you’re one of those people who feels that people are going to slack off if you aren’t staring over their shoulder to make sure they’re working, working from home isn’t for your team. But I would say that if you can’t trust your people to be responsible adults, you have a bigger problem (and maybe the problem is you).

This is a results oriented business. If I stare over your shoulder all day and you don’t deliver, that doesn’t do anyone any good. But if I find self-motivated people and give them a reason to work hard and they take ownership of a problem and deliver, then everyone is going to win. I know my people are getting things done, they’re asking me questions all day, and I can watch tickets move on the online board.

What’s next

Part of the reason that I wanted to do this was because I knew we might have remote developers at some point, and that has come to pass as I just got a developer on my team who lives in Louisiana to go along with our BA from Louisiana. I wanted to practice working remotely so that we could get good at it. Now that we’re good at it, maybe we work from home more than just one day a week. Maybe we actually start recruiting team members that work in other cities. There are many teams that are 100% remote and they make it work. I don’t know that we’ll go 100% remote, but who knows.

If you’re a co-located team that typically comes into the office every day, maybe give it a try. Pick a Friday and have everyone work from home for a day and see what happens. We like what’s happened on our team, and maybe you will too.





Moving past the monolith, Part 8 – Planning ahead

Posted on November 4, 2017

This is part of a series of posts about Moving Past The Monolith. To start at the beginning, click here.


Most of us are well-intentioned and never set out to create the giant monolith that weighs down the entire company, but it continues to happen everywhere. After making the mistake many times myself, I’ve realized that the only way to stop this from happening is to be more proactive.

We need to discuss modularity when we start creating applications. We need to discuss it any time we move into a new set of functionality. We need to discuss it throughout the life of the application before we get past a point of no return. We need to be thinking now about how someone is going to need to replace the code that we’re writing today.

We can’t just write up a bunch of tickets and have a bunch of developers write code over many years and expect that we’ll end up with something modular. This will always lead to a monolithic application. The only way to avoid this is to make modularity an important design consideration.

This might mean changing how we design our domain models, creating copies of data that are sourced from a master copy, creating service boundaries between modules (either physical or logical), and possibly doing a little more work to protect your ability to change.
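To make that concrete, here is a minimal sketch of what a logical service boundary between two modules might look like in C# (all of the names here are made up for illustration):

namespace Accounts.Contracts
{
    // The only things other modules get to see: a flat DTO...
    public class AccountSnapshot
    {
        public int AccountId { get; set; }
        public decimal Balance { get; set; }
    }

    // ...and a narrow interface. Everything behind this boundary can be
    // refactored or even replaced without touching the consuming modules.
    public interface IAccountQueries
    {
        AccountSnapshot GetAccount(int accountId);
    }
}

Whether the boundary is physical (a web service) or logical (an interface like this one), the point is the same: consumers depend on a contract, not on your domain classes or tables.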

This is something your whole team needs to be aware of. Your leads might not know about that code that blurs the module boundaries until it’s too late and you don’t have time to refactor the code. Everyone needs to understand why you’re building modular code and how you plan on doing it.

My experience with modularity

After building many monoliths, I’ve been using these concepts on my current project, and we are reaping the benefits. We have many different deployable modules, and several different solutions (less compiling!). We have modules on very different deployment schedules — some deploy when needed, some deploy on regular schedules, and some don’t deploy at all. Some have modular code but need to be split out into their own deployments so that they can be deployed independently. Some of our code still feels monolithic — some projects are hard to change and take a long time to compile, and some of it is on our list of things to refactor, but since it wasn’t built in a modular way, doing so is proving to be difficult.

I’m really excited about where we are headed, and I’m more confident than ever that we’re going to be able to build large enterprise applications without creating a monster or ending up with a giant .NET solution that takes 3 minutes to rebuild.

I would love to hear from anyone else taking this approach, and I would love to know how it’s going for you and what lessons you’ve learned. I imagine I will look back on this post a year from now and want to make a lot of edits based on things that I’ve learned. I’m OK with that, that just means that I’m learning, and learning is a good thing.

If you’ve made it this far, thank you for joining me on this journey! I hope that something in here will empower you to start creating more modular and maintainable software.





Moving past the monolith, Part 7 – Splitting up your client-side applications

Posted on

This is part of a series of posts about Moving Past The Monolith. To start at the beginning, click here.


JavaScript can be modular too! On the surface, everyone knows this, and frameworks like Angular even have the concept of modules. But even with Angular modules guiding you towards modularity, it’s just as easy to create a monolith.

If you’re truly building modular applications, consider breaking the modules up into their own deployable web applications. There are so many good reasons to do this.

  • This allows us to deploy (or more importantly, not deploy) each module independently! This is a huge deal! No more regression testing the whole application when you change one part of it.
  • We can use shared modules (which contain global UI elements, CSS, and shared JavaScript classes) to make sure that all of the modules have the same look and feel.
  • When (not if) you want to switch web frameworks at some point (and you know you will — how many of you are stuck on an AngularJS monolith when you wish you could build new stuff in a newer framework?), you can start building new things a new way without having to rewrite the rest of the application.

I really can’t emphasize that last point enough. JavaScript fatigue is a thing, and JavaScript frameworks are coming and going out of style at a ridiculous pace. Someday you (or someone you want to hire) will need to maintain your application, and I’m guessing you would much rather do that in a “modern” web framework (whatever “modern” means at the time).

Most of you will want to create one URL that users go to in order to access the application, but this doesn’t mean that you can’t deploy each module independently. Use a reverse proxy like IIS URL rewriting or nginx to set up routing rules that redirect traffic to different hosted web sites based on the URL. Reverse proxy routing is different from DNS routing (which just routes a domain or subdomain to an IP address); it allows you to route based on patterns in the URL (e.g. I can route http://mysite.com/posts and http://mysite.com/users to different hosted web sites).
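As a minimal sketch, routing rules like these in nginx would do it (the module paths and ports here are made-up assumptions):

# Route URL paths on one public site to independently deployed module apps.
server {
    listen 80;
    server_name mysite.com;

    location /posts/ {
        proxy_pass http://localhost:5001/;   # the "posts" module's web app
    }

    location /users/ {
        proxy_pass http://localhost:5002/;   # the "users" module's web app
    }
}

Users still see one site at mysite.com, but each location block points at a web application that you can build and deploy on its own schedule.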


Read the next post in this series, Planning ahead.





Moving past the monolith, Part 6 – Using package managers to share common code

Posted on

This is part of a series of posts about Moving Past The Monolith. To start at the beginning, click here.


In my last post, I talked about creating “shared modules” that contain code that is needed across modules. Most applications will need something like this, and it can be very useful.

There are two ways to consume the shared module. You could include it with all of your other application code, and other modules can reference it directly. In some cases this makes sense, but now you have a problem — anytime you change something in the shared module and it affects the consumers, all of the consumers must be updated to handle the change. If the consumers are deployed independently, you might not want to have to change that much code.

The second way is to distribute the shared module through your package manager (NuGet, rubygems, npm, etc.). The beauty of using package managers is that they can store different versions of a package, so consumers get to decide when they opt into the changes. This gives you the freedom to change shared code without impacting consumers that don’t want to accept the changes (e.g. legacy code or things that you don’t want to retest and deploy). All of these package managers allow you to set up your own server to host packages so that you can have your own internal package source that isn’t exposed to the outside world.

This can get a little tricky when the shared module changes involve breaking database schema changes. Things like this would force all consuming modules to get updated, but you probably knew you were getting that when you decided to make the schema changes.

It’s not always that easy

While this approach might seem simple and straight-forward, it actually has some quirks to be aware of. Here are some things to watch out for.

  • Be careful of adding dependencies in your shared module that are exposed to the consumers, because you’re effectively forcing those dependencies on your consumers. A classic example is when someone adds a reference to an IoC container to the shared module, but one of the systems consuming the shared module uses a different IoC container, so things don’t work. (See the sketch after this list.)
  • Any time your dependencies are exposed to consumers, the shared module and the consumers are forever tied to the same version of that dependency. This means that a dependency version update in the shared module will force all consumers to make the same update, and consumers will not be able to update their versions unless the shared module makes the same update.
  • There is a difference between shared modules that are explicitly created for sharing between modules in the same application and shared modules that are meant to be shared across applications. In the first case, you’re more likely to accept the version coupling that I’ve talked about, but in the latter case, you really don’t want to introduce version coupling between shared modules and many different applications (especially if they are owned by different development teams).
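Here is a minimal C# sketch of that first pitfall, assuming Autofac as the container (the class and method names are hypothetical):

using Autofac;

namespace MyCompany.SharedModule
{
    public interface IClock
    {
        System.DateTime Now { get; }
    }

    public class SystemClock : IClock
    {
        public System.DateTime Now => System.DateTime.Now;
    }

    public static class SharedRegistrations
    {
        // Exposing ContainerBuilder in a public signature forces every
        // consumer to reference Autofac (and to stay on the same version
        // of it), even if a consumer uses a different IoC container.
        public static void RegisterAll(ContainerBuilder builder)
        {
            builder.RegisterType<SystemClock>().As<IClock>();
        }
    }
}

If the registration helper took no container-specific types (or lived in a separate, optional package), consumers using a different container wouldn’t be affected.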

Read the next post in this series, Splitting up your client-side applications.





Moving past the monolith, Part 5 – Minimizing sharing between modules

Posted on

This is part of a series of posts about Moving Past The Monolith. To start at the beginning, click here.


We’ve been talking about how you can break your application up into modules, which are groupings of functionality that can function (and potentially be deployed) as a semi-independent unit. In most cases, you’re probably still going to have a decent amount of shared code, database tables, CSS, and JavaScript code that needs to be shared across all modules.

I have no problem with the “shared module” that everyone ends up creating. This is a necessary part of every application, and by no means would I encourage you to copy and paste code. :) As always, there are some things to consider:

  • Are you putting something in a shared module just because you think it will be shared or because you know it will be shared?
  • These shared modules are for sharing within your application, never outside your application. If you need to share things with other teams, create services, database schemas, or something special for those teams.
  • Understand that every time you put something in a shared module, any changes to that code could impact any number of modules using it, which may involve you having to change, refactor, and deploy many other modules. The benefits will typically outweigh the downsides, but make this a conscious decision.
  • Pay attention to situations where you start seeing so much related functionality in the shared module that maybe you need to birth a new module out of it.

One of the goals of modular software is making change, refactoring, and replacement easier. Shared modules can help you achieve your goals when used within reason, but make sure you remain aware of what’s going on so that you don’t end up with too much tight coupling.


Read the next post in this series, Using package managers to share common code.





Moving past the monolith, Part 4 – Using the Service Object pattern

Posted on

This is part of a series of posts about Moving Past The Monolith. To start at the beginning, click here.


One of the typical characteristics of monoliths is giant classes that group otherwise unrelated sets of functionality. You may find this in domain model classes (the Rails “fat model” conundrum), or in “god classes” that typically end with words like “Manager” or “Logic” and just group methods that are related to some common entity in the system.

These super-large classes don’t provide much benefit in terms of shared functionality. Many times you have small groups of methods within those classes that call each other in a little mini-system. Sometimes you have methods that are used by several other methods, but in that case you don’t know what you’re going to break when you change them. In all cases, the code tends to be difficult to change because you don’t know what the side effects will be.

The Service Object pattern is one way to solve this problem (also known as “domain services” in Domain Driven Design). There are many articles you can read that explain this concept in depth, but I’ll explain how I’ve been using it.

The backend of pretty much every application has some sort of internal API layer that is exposed to outside consumers or the UI of the application. These may be HTTP services, message queues, or just a logical separation between your UI and your business layer. However this manifests itself doesn’t matter; what’s important is that you have some place where you have a set of queries or actions that can be called by a UI or some other caller.

This API layer represents the set of capabilities that your application can perform – no more, no less. This is a description of the surface area that is exposed to the outside world. This also describes the things that I need to test.

Let’s imagine that we’re writing an application to do bank account functions. We’ll assume for this example that I’m exposing these through a .NET Web API controller.

using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Description;

public class AccountController : ApiController
{
    private readonly IDepositService _depositService;
    private readonly IWithdrawService _withdrawService;

    public AccountController(IDepositService depositService, IWithdrawService withdrawService)
    {
        _depositService = depositService;
        _withdrawService = withdrawService;
    }

    [HttpPost]
    [ResponseType(typeof(DepositResponse))]
    public async Task<IHttpActionResult> Deposit(DepositRequest request)
    {
        return Ok(await _depositService.Execute(request));
    }

    [HttpPost]
    [ResponseType(typeof(WithdrawResponse))]
    public async Task<IHttpActionResult> Withdraw(WithdrawRequest request)
    {
        return Ok(await _withdrawService.Execute(request));
    }
}

Let’s look at some of the characteristics of this controller:

  • The controller methods do nothing other than call the domain service and handle HTTP stuff (return codes, methods, routes)
  • Every controller method takes in a request object and returns a response object (you may have cases where there are no request parameters or no response values)
  • The controller is documentation about the capabilities of the application, which you can expose with tools like Swagger and Swashbuckle (if you’re in .NET)

Now let’s move on to the domain services.

Let’s say that I have an Account domain model that looks like this:

public class Account
{
    public int AccountId { get; set; }
    public decimal Balance { get; private set; }
    
    public void Deposit(decimal amount)
    {
        Balance += amount;
    }
    
    public void Withdraw(decimal amount)
    {
        Balance -= amount;
    }
}

My domain service looks like this:

using System.Linq;
using System.Threading.Tasks;

public class DepositService : IDepositService
{
    private readonly IRepository _repository;

    public DepositService(IRepository repository)
    {
        _repository = repository;
    }

    public async Task<DepositResponse> Execute(DepositRequest request)
    {
        // Load the account, apply the domain behavior, and save.
        var account = _repository.Set<Account>().Single(a => a.AccountId == request.AccountId);
        account.Deposit(request.Amount);

        _repository.Update(account);

        return new DepositResponse { Success = true, Message = "Deposit successful" };
    }
}

My domain service contains all of the code needed to perform the action. If I need to split anything out into a private method, I know that no other classes are using the same private methods. If I wanted to refactor how depositing works, I could delete the contents of the Execute() method and rewrite it and I wouldn’t have to worry about breaking anything else that could’ve been using it (which you never know when you have god classes).

You may notice that I do have some logic in the Account class. It’s still a good idea to have methods on your models that can be used to do things that will update the properties on the domain model class rather than just updating raw property values directly (but I’m not one of those people that says to never expose setters on your domain models).

I’m also using the same request and response objects that are being used by the caller. Some people like to keep the request and response objects in the controller layer and map them to business domain model objects or other business layer objects before calling the domain service. By using the request and response objects, I’m eliminating unnecessary mapping code that really has no value, which means less code, fewer tests to write, and fewer bugs.

I prefer to have each domain service handle only one action (e.g. one public Execute() method). I’m trying to get away from the arbitrary grouping of methods in domain services, where methods exist in the same class only because they’re working with the same general area of the system. You will have cases where multiple controller actions are very much related and it will make sense for them to share a domain service. If you use common sense, you’ll know when to do this.

Testing this class is going to be pretty easy. I really only have to worry about 3 things here:

  • The input
  • The output
  • Any database updates or calls to external services that are made in the method

Not only that, since all of the logic I want to test is encapsulated in one class, I’m not going to end up with lots of mocks or have to split up one action into multiple sets of tests that each test only half of the action. I also know that my application has a finite set of capabilities, which means that I have a finite set of things to test. I know exactly how this action is going to be performed.
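As a sketch of what that looks like, here is a test for the DepositService above, assuming xUnit and Moq (not necessarily what you’d use), and assuming IRepository.Set<T>() returns an IQueryable<T> and that DepositRequest has settable AccountId and Amount properties:

using System.Linq;
using System.Threading.Tasks;
using Moq;
using Xunit;

public class DepositServiceTests
{
    [Fact]
    public async Task Deposit_AddsAmountAndSavesTheAccount()
    {
        // The input: a known account and a deposit request against it.
        var account = new Account { AccountId = 1 };
        var repository = new Mock<IRepository>();
        repository.Setup(r => r.Set<Account>())
                  .Returns(new[] { account }.AsQueryable());

        var service = new DepositService(repository.Object);
        var response = await service.Execute(
            new DepositRequest { AccountId = 1, Amount = 100m });

        // The output: a successful response and an updated balance.
        Assert.True(response.Success);
        Assert.Equal(100m, account.Balance);

        // The side effect: the account was saved through the repository.
        repository.Verify(r => r.Update(account), Times.Once);
    }
}

One mock, one action, and the whole behavior of the action is verified in one place.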

Reducing layers

Most applications tend to have layers. The typical example (which I’m using in my example) is when you have a UI layer that calls a controller layer which calls a business layer which calls a data layer which calls a database (and then passes information back up through the layers). If you were to draw this up as a picture, the API layer of most applications would look like this:

[image: application layers (old way)]

There’s a problem with this though. The picture clearly shows that the controller layer has a finite set of things it can do, but the surface area of the business layer is potentially much larger.

Some people will think of their business layer as another kind of API layer, with the consumers being the controller layer and other callers inside the business layer. The problem is that in most code bases, the business layer has a very large surface area because there are many public methods that aren’t organized well. This is difficult to test because you don’t know how the business layer is going to be used, so you have to guess and write tests based on assumptions. That means you’re probably going to miss a scenario you should test, and you’re also going to test scenarios that will never happen in your application.

What this modular code structure emphasizes is that our application is made up of a finite set of actions that take in inputs, modify state, and return outputs. When you structure your code this way, your layers actually look like this:

[image: application layers (new way)]

Now my business layer has a finite set of capabilities to test, I know exactly how it can be used, and my code is organized around how it will be used.

What do I do when my domain services need to share code?

If my domain service objects are going to use all of these different request/response objects as inputs and outputs, what happens when multiple domain services need to share code?

In our codebase, we have “helper” classes that perform these shared actions (when I can’t put the code on the domain models themselves). A good example would be a SendEmailHelper class that takes care of sending emails, which is something that many domain services might want to do.
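As an illustration, a hypothetical SendEmailHelper might look something like this (the interface exists so that domain service tests can swap in a mock instead of talking to a real SMTP server):

using System.Net.Mail;
using System.Threading.Tasks;

public interface ISendEmailHelper
{
    Task SendAsync(string to, string subject, string body);
}

public class SendEmailHelper : ISendEmailHelper
{
    public async Task SendAsync(string to, string subject, string body)
    {
        // The server and from-address are placeholders for illustration.
        using (var client = new SmtpClient("smtp.example.com"))
        {
            await client.SendMailAsync(
                new MailMessage("noreply@example.com", to, subject, body));
        }
    }
}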

There is an intricacy here to consider — if you split something out into a helper class, do you want to mock out that helper class in a unit test? There are times when you do and times when you don’t. If you’re sending an email (which interacts with an external SMTP server), you likely would mock out the SendEmailHelper in your domain service tests and then write separate tests for the SendEmailHelper. Sometimes you might have a helper class that exists because it’s shared code, but you want to be able to write unit tests that test the entire mini-system of your domain service action. In this case, it’s totally OK to new up the concrete helper class and use that in your test. Not every external dependency needs to be mocked out in a test, sometimes mocks are the wrong way to go.

My big thing is that I want the unit tests for my domain services to effectively test the spirit of what the domain service is supposed to do. I have had cases where I’ve run across code that was split out into so many helper classes (many of which were only used by the domain service) and unit testing becomes really difficult because your tests have so many mocks, and each test class feels like it’s testing only part of what the domain service does. If you run into this sinking feeling, maybe you should reconsider how you’re writing your tests or organizing your code.

Isn’t this the classic anemic domain model anti-pattern?

I don’t think we’re violating the spirit of the rule here. I agree that there should still be things that you put on the domain model objects themselves, such as validation rules (required fields, string lengths, other business validation rules), calculations (e.g. FullName = FirstName + ” ” + LastName), and methods used to modify properties (e.g. our Deposit() example).

This is a good example of using common sense, because thousands of Rails developers screamed at the thought of anemic domain models and then ended up with fat models instead, which (IMO) is a bigger problem.

Object-oriented programming is not a panacea

Object-oriented programming is often talked about as the “best” way to write code, but that doesn’t mean that everything has to be OO. Procedural programming is often associated with negative things like giant stored procs and legacy VB codebases, but that doesn’t mean that all procedural code is bad. The approach I’ve outlined is still based in object-oriented programming, but it involves more procedural code and embraces the fact that our applications are a collection of procedures based around a rich set of objects. I’m doing this because it’s a conscious decision to move more towards modularity, maintainability, and easier testing.


Read the next post in this series, Minimizing sharing between modules.





Moving past the monolith, Part 3 – Think about how you share data

Posted on

This is part of a series of posts about Moving Past The Monolith. To start at the beginning, click here.


In my last post, I talked about how you can separate data within your application. Sooner or later, someone outside your team is going to need to access your data. How are you going to get it to them while maintaining the flexibility that you need to be able to change when you need to?

It’s unlikely that you’re going to be able to go on forever without anyone asking for access to your data. People usually need it for a good reason — your data has a lot of value, so you should be willing to share it. But you want to do it in a controlled manner. Here is my typical thought progression:

  • Can I create a schema that contains only views and stored procs that the other application will use? (A sketch of this follows the list.)
  • If I have to give teams access to tables, can I limit what they have access to? How will I know what they have access to?
  • If someone needs to update data, does it really have to be through database calls or can you have them call a web service instead? If a straight data load makes sense, how do you make sure that this doesn’t adversely affect anything else that my application might do?
  • If someone asks to have read-only access to the database to write queries, are they doing it just for research purposes or are they going to write application code against those tables?
  • If BI or reporting teams want access to the database, can they write their queries in stored procs or views in a specific schema so that you know what they’re touching? (Especially watch out for SSIS packages that have custom SQL in them, it’s extremely difficult to figure out what is in those packages if you didn’t write them, and you won’t know if you’re going to break if you change something.)
  • How do I keep track of who has access to my database so that I can notify them when I’m going to make a potentially breaking change? (If you don’t know who has access, this will severely limit your ability to change because you will have no idea what you’re going to break.)
  • If another team wants to write application code against my data model, can they code against a copy of the data in their database which gets loaded from a batch job instead so that we both can maintain the freedom to change? Does it make sense to get the data from a web service instead?
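As a sketch of that first question, in SQL Server you could expose a dedicated schema containing only views, and grant consumers access to the schema rather than to the tables (all names here are made up):

-- Consumers query the view, never the underlying tables, so the tables
-- can change as long as the view keeps the same shape.
CREATE SCHEMA reporting;
GO

CREATE VIEW reporting.AccountSummary AS
    SELECT AccountId, Balance
    FROM dbo.Account;
GO

-- Grant access to the schema, not to the individual tables.
CREATE ROLE reporting_readers;
GRANT SELECT ON SCHEMA::reporting TO reporting_readers;

Now you know exactly what the other application can see, and you remain free to change anything that isn’t exposed through that schema.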

Other people wanting access to your data is inevitable. How you manage it is up to you, but it’s important that you be proactive about managing everyone who is dependent on your data.


Read the next post in this series, Using the Service Object pattern.




