Ruby on Rails and the Single Responsibility Principle

Several people took issue with my last post about Ruby on Rails, saying that my example violates the Single Responsibility Principle. Here’s where I stand on this.

The SOLID principles are principles, not hard and fast laws of software development. When you apply the SOLID principles, common sense should rule. There are times when violating the textbook definition of the principles is OK (regardless of what language you are using). The ultimate goal of all design principles should be reducing the cost of change. Reducing the cost of change comes in many forms:

  • Being able to get things done faster because you have to write less code
  • Being able to get things done faster because it’s easier to figure out code and what you need to change
  • Being able to get things done faster because code is wrapped in tests so you can change it and not worry about breaking things
  • Being able to get things done faster because you have automated tests which decrease the amount of manual testing that you or your QA team needs to do
  • Being able to get things done more safely by having abstraction layers in place so that changing code in one place doesn’t break something else
  • Having well defined requirements so that you don’t waste time on rework
  • Having business sponsors available to answer questions quickly
  • Many more reasons that I’m not thinking of

With this in mind, all of my design decisions, including how I apply the SOLID principles, need to reduce the cost of change.

Also, keep in mind that the SOLID principles were written with statically typed languages in mind (C++, Java, .NET). While many of the same principles apply in dynamic languages, you can’t assume that everything is the same. In fact, the “I” (Interface Segregation) and “D” (Dependency Inversion) of the SOLID principles arguably don’t apply to dynamic languages at all.
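To illustrate what I mean, here’s a quick sketch (not code from my project; the SignupService and EmailNotifier names are made up for the example). In Ruby there is no interface to segregate or invert, because any object that responds to the right method will do:

class SignupService
  # No interface anywhere. The default collaborator is just a convention,
  # and a test can pass in anything that responds to #notify.
  def initialize(notifier = EmailNotifier.new)
    @notifier = notifier
  end

  def register(user)
    user.save
    @notifier.notify(user)   # duck typing stands in for dependency inversion
  end
end

In .NET I would have declared an INotifier interface and injected it through the constructor; in Ruby the “contract” is just the method name.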

Several people said that my Ruby class violated SRP because it mixed class behavior, database mapping, and validation all in one class. Again, this is where I would take issue with your application of SRP. Here is the Ruby class in question:

class User < ActiveRecord::Base
  belongs_to :user_status
  has_many :user_roles                 # join model needed for the :through below
  has_many :roles, :through => :user_roles
  validates_length_of :name, :maximum => 100
  named_scope :active, :conditions => ["user_status_id = ?", UserStatus::ACTIVE]
end

On my last .NET project, I had domain model classes like this:

public class User : Entity
{
    public virtual long Id { get; set; }
    [Required("Name"), Length(50)]
    public virtual string Name { get; set; }
    public virtual IList<UserRole> UserRoles { get; set; }

    public virtual bool IsInRole(string role)
    {
        // code here
    }
}

I was using Fluent NHibernate as my ORM on this project, so I had a database table that looked like this object. In this case, I have class behavior (the IsInRole method), validation (the attributes on the Name property), and implied database mapping (via Fluent NHibernate conventions). How is this any different from my Ruby class?

Some people might argue that the textbook definition of SRP says I shouldn’t put validation attributes on properties like that. What is the alternative? Write a line of custom code in an IsValid method or in a separate validator class, I assume (a sketch of that alternative is below). Is that reducing the cost of change? If I write custom code, I have to write a test for it. If I write boilerplate like [Required("Name")] or validates_presence_of :name, it is so simple that I’m not going to write a test for it, because I don’t think I gain anything from doing so. If I can add a whole bunch of behavior in just one line of code, then potentially violating the textbook definition of SRP is OK with me, because it’s way easier to do it that way and it just makes sense.
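Here’s a rough sketch of that tradeoff in Ruby (the UserValidator class is a hypothetical stand-in for the “separate validator” approach, not something from my actual code):

# Option 1: the declarative one-liner. So simple I wouldn't bother testing it.
class User < ActiveRecord::Base
  validates_presence_of :name
end

# Option 2: the "textbook SRP" alternative, a separate validator class.
# It's custom code now, so I owe it a test of its own.
class UserValidator
  def valid?(user)
    !user.name.blank? && user.name.length <= 100
  end
end

Either way the rule gets enforced; the difference is one line of declarative configuration versus a class, a method, and a spec.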

Back to my Ruby class. My class knows how to load, save, and delete itself. In .NET, I would say this is a bad thing. First, I wouldn’t be able to stub out the database in a test (which I can in Ruby, because I can stub out anything). Second, if you have multiple entities that need to do something a certain way (e.g. multiple entities that need to be cached when loading, saving, or deleting them), you would have no easy way to do that without duplicating some code; in Ruby, I can create a module, include it in a class, and in that way override a method like “save” (a sketch of this is below). Third, I would have a direct dependence on the database, which violates the Dependency Inversion Principle. That principle doesn’t really apply in Ruby, because all I’m depending on is some class out there with a method with a certain name, as opposed to .NET, where I’m depending on a method in a class in an assembly in a .dll with a specific version. So as you can see, your domain model takes on a much different shape in Ruby vs. .NET. Both ways have their pluses and minuses, but each way is the best way for the language that you’re using.
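Here’s a minimal sketch of the module approach, assuming Rails 2.x-era ActiveRecord; the Cacheable name and the cache key format are made up for the example:

# A module that any ActiveRecord model can include to get cache expiry
# around save, without duplicating that code in each class.
module Cacheable
  def save(*args)
    result = super   # falls through to ActiveRecord::Base#save
    Rails.cache.delete("#{self.class.name.underscore}/#{id}") if result
    result
  end
end

class User < ActiveRecord::Base
  include Cacheable   # User#save now expires the cache entry too
end

And because anything in Ruby can be stubbed, none of this gets in the way of testing; a spec can stub save (or the cache) directly instead of going through an injected repository interface.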

There is one area where I see an interesting clash between Ruby and SRP. Because I don’t have to use dependency injection, and because I can stub out any method in a test, it makes sense to put all of the methods relating to an entity into the entity class itself; TDD will lead you in that direction. Doing this will leave you with some really big domain model classes, though.
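As a sketch of what that ends up looking like (the method names and the Notifier mailer are hypothetical, and the spec syntax is RSpec 1.x-era):

# Everything about a user hangs off User itself: queries, permissions, notifications.
class User < ActiveRecord::Base
  def active?
    user_status_id == UserStatus::ACTIVE
  end

  def can_edit?(document)
    roles.any? { |role| role.name == "editor" } || document.owner_id == id
  end

  def send_welcome_email
    Notifier.deliver_welcome(self)   # hypothetical ActionMailer class
  end
end

# In a spec, any of those methods can be stubbed, so nothing has to be pulled
# out into a separate service class just to make the object testable.
describe User do
  it "welcomes a new user without actually sending mail" do
    user = User.new
    user.stub!(:send_welcome_email)
    # ... exercise the code that triggers the welcome email ...
  end
end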

Would this violate the textbook definition of SRP? I would say yes, and you run into some of the issues that come with big classes (they’re harder to read). In .NET, we would argue that we should move a lot of this behavior into domain service classes, which would be smaller and easier to use from other classes (by injecting the service’s interface into the constructor of the class that uses it). In .NET you have to pull behavior out of the entity object in order to be able to stub it out in a test, so at that point you might as well go the rest of the way and make small classes. But if you had the flexibility of being able to stub out anything in a test, and you had Ruby-style mixins in .NET, would you really do it this way? One could argue that the .NET habit of pushing behavior into domain services is an example of Martin Fowler’s anemic domain model anti-pattern.

In both languages, you are making a tradeoff. In .NET, we are saying that testability is more important than potentially having an anemic domain model and having model behavior scattered around in domain services. In Ruby, we are saying that we would rather have all of the behavior in the entity class itself because it makes sense to have it there (we don’t have to worry about testability, because everything is testable), even though we could end up with some big classes. Neither way is necessarily wrong; it just depends on the language you’re working in. In some cases, the .NET and Ruby communities have generally agreed on their own “best practices” and ways of doing things, and that’s just how people do things in those languages, often because the languages and frameworks make it easier to do it that way.

In summary: it depends, and use common sense. Remember what your goal is: to deliver working software. Your architecture is not the deliverable, and it’s not what makes money for the business. Flexibility, design patterns, abstraction layers, and the like all come at varying levels of cost. It’s up to you to decide whether they’re worth it in your situation.