Engineering Ideas #8
ORM dilemma, curse of Lisp, coping with complexity of systems, leadership of ICs, software development top mistakes, productivity = prioritisation
ORM Hate
Martin Fowler warns against swinging all the way back from using an existing ORM to rolling your own mapping framework:
For much of the 90's I saw project after project deal with the object/relational mapping problem by writing their own framework - it was always much tougher than people imagined. Usually, you'd get enough early success to commit deeply to the framework and only after a while did you realize you were in a quagmire - this is where I sympathize greatly with Ted Neward's famous quote that object-relational mapping is the Vietnam of Computer Science.
A conclusion from Ted Neward’s write-up referenced above:
Just as it’s conceivable that the US could have achieved some measure of “success” in Vietnam had it kept to a clear strategy and understood a more clear relationship between commitment and results (ROI, if you will), it’s conceivable that the object/relational problem can be “won” through careful and judicious application of a strategy that is clearly aware of its own limitations.
Fowler suggests the moderate approach:
This is where the criticism comes that ORM is a leaky abstraction. This is true but isn't necessarily a reason to avoid them. Mapping to a relational database involves lots of repetitive, boilerplate code. A framework that allows me to avoid 80% of that is worthwhile even if it is only 80%. The problem is in me for pretending it's 100% when it isn't. David Heinemeier Hansson, of Active Record fame, has always argued that if you are writing an application backed by a relational database you should damn well know how a relational database works. Active Record is designed with that in mind, it takes care of boring stuff, but provides manholes so you can get down with the SQL when you have to. That's a far better approach to thinking about the role an ORM should play.
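To make the “80% plus manholes” point concrete, here is a minimal sketch of that split in Python with SQLAlchemy (not the Active Record library Fowler mentions, but the same idea); the schema and the reporting query are invented for illustration:

```python
from sqlalchemy import String, create_engine, text
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100))
    country: Mapped[str] = mapped_column(String(2))


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # The boring 80%: plain CRUD goes through the ORM, no hand-written SQL.
    session.add_all([User(name="Ada", country="GB"), User(name="Linus", country="FI")])
    session.commit()

    # The "manhole": drop down to raw SQL when the mapping gets in the way
    # (reporting queries, vendor-specific features, bulk operations, ...).
    report = session.execute(
        text("SELECT country, COUNT(*) AS n FROM users GROUP BY country ORDER BY n DESC")
    )
    for country, n in report:
        print(country, n)
```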
On Scaling Mental Models
Hillel Wayne offers an interesting idea about why codebases in “cool” programming languages don’t scale:
Programming is an expression of how we think. We take the basic instructions and build abstractions over them. The more power you have in building abstractions, the more you can tailor those abstractions to your exact needs and exactly how you think. […]
This compounds. If you’re working by yourself, you can shape your code and environment to reflect your mental model. This makes it easy to quickly write terse, simple, maintainable code. But it’s hard for other people to work with you. They don’t share your mental model, and they don’t come in with all your initial assumptions. This is somewhat addressable if you all start working on the project together but falls apart when people join on later.
I cannot say I fully agree with this as an explanation of why technology written in these languages gets rebuilt in more mundane languages later (Wayne gives the example of the Reddit website being rewritten from Lisp to Python).
There exist huge codebases in Scala with hundreds or even thousands of engineers working on them (Twitter), and Scala is by all accounts a very powerful language that allows shaping very advanced abstractions.
The reasons for rewrites away from Lisp might simply be its poor runtime performance or the difficulty of finding engineers to write in it. These may be related to the “expressivity” factor, but they are nevertheless different points. There may be problems with scaling Lisp in production, but they may be more prosaic than “hard to scale mental models”, which may actually be a non-problem.
STELLA: report from the SNAFU-catchers workshop on coping with complexity
By David Woods, summary by Adrian Colyer. (SNAFU stands for “Situation Normal: All Fucked Up”, i.e. an accident, a disruption.)
On the ability to cope with complexity and why one should be really careful adding that to a system:
Woods’ Theorem: As the complexity of a system increases, the accuracy of any single agent’s own model of that system decreases rapidly.
Remember that ‘agent’ here could be a human – trying to form a sufficient mental model to triage an incident for example – or a software agent.
On the similarities between failures in complex systems (Sidney Dekker makes almost the same point in “Drift into Failure”):
Section 3.4.1 in the paper lists some interesting features that all the anomalies had in common, including:
Arising from unanticipated interactions between system components
Having no single ‘root cause’ (during my holiday reading I discovered by accident that Tolstoy has a lot to say on this topic in ‘War and Peace’ – try reading the first couple of pages from Vol III, Part I, Chapter I for example!).
Having been lurking in the system as ‘accidents waiting to happen’ for some time
Being triggered by only slight differences from normal operating conditions
Initially being buffered by system mechanisms designed to cope with partial failures etc., but eventually exhausting those mechanisms
How Can I Prepare to Eventually Move into Engineering Management?
Another great post by Gergely Orosz, providing a mental map of how to transition into management, or, I would say, how to grow as an individual contributor as well.
It’s hard to cite anything specific; I’d rather suggest reading the whole post. Here are some of the topics:
Lead a project and focus on completing it.
Look for bottlenecks the team is facing that slow work down and fix them.
Pick up thankless tasks that no one else wants to do.
Give credit to others, selflessly.
Start mentoring other developers.
Give candid feedback and don't shy away from difficult conversations.
Network with other professionals within or outside the company. Connect with non-engineers and learn about their work.
Observe and learn from your managers.
The last point reminds me of Babak Nivi’s idea about leverage from here:
Look at who is getting leverage off of the work that you’re doing. Look up the value chain—at who’s above you and who’s above them—and see how they are taking advantage of the time and work you’re doing and how they’re applying leverage.
Software Development’s Classic Mistakes
Construx Software, Steve McConnell’s company, conducted a survey about the most frequent mistakes on software projects. Here are the top 14 most damaging mistakes by exposure (frequency multiplied by severity; a small sketch of this calculation follows at the end of this section):
1. Unrealistic expectations
2. Overly optimistic schedules
3. Shortchanged quality assurance
4. Wishful thinking
5. Confusing estimates with targets
6. Excessive multi-tasking
7. Feature creep
8. Noisy, crowded offices
9. Abandoning planning under pressure
10. Insufficient risk management
11. Heroics
12. Shortchanged upstream activities
13. Inadequate design
14. Lack of user involvement
The common themes that I see here:
Wishful thinking: mistakes 1, 2, 4, 5, and 10.
Underprioritized planning and quality: mistakes 2, 3, 5, 9, 10, 11, 12, and 13.
Requirements and product mistakes: 1, 7, and 14.
Developers unable to focus: mistakes 6 and 8; cf. “Deep Work” by Cal Newport.
The white paper includes more elaborate descriptions of the mistakes, which are also interesting.
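As a side note on the methodology, “exposure” here is just frequency multiplied by severity. A tiny sketch of how such a ranking could be computed, with made-up scores rather than Construx’s survey data:

```python
# Rank mistakes by exposure = frequency * severity.
# The scores below are invented for illustration, not Construx's survey data.
mistakes = {
    "Unrealistic expectations": (0.9, 8.0),  # (frequency 0..1, severity 0..10)
    "Noisy, crowded offices": (0.7, 4.0),
    "Heroics": (0.5, 6.0),
}

ranked = sorted(mistakes.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (freq, sev) in ranked:
    print(f"{name}: exposure = {freq * sev:.1f}")
```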
On our to-do list: an interview with Google’s productivity expert
Laura Mae Martin gets straight to the point:
— What’s one thing people should start doing to manage their workload more efficiently?
— Determine your top priorities for the quarter, and write them on a note on your desk. If you’re asked to do something that doesn’t align with one of those priorities, say no. The more you say no, the more chances you have to say yes to something that really matters.
Thank you for reading thus far!
You can subscribe to new Engineering Ideas via RSS (e.g. using Feedly) or e-mail.