Conway’s Law: Does Your Organization Structure Make Software Security Even More Difficult?
These days I find myself in a lot of meetings where people talk about things like risk management and compliance as well as software security. These meetings made me reflect on how and why secure development programs are successful in organizations.
When we created the SDL at Microsoft, my team was part of the Windows security features development organization. Secure development was one of our roles and initially the smallest part of the team’s work. But because secure development sat inside the product engineering organization, the approach we took – pretty much from day one – focused on training, motivating, and empowering the engineers who wrote the code to design and develop secure software. We started with training and motivation. Over time, we added more and more enablement in the form of very specific secure development tools and guidance.
What we didn’t do was put a lot of emphasis on compliance or after-the-fact testing. The SDL was mandatory, but our approach, even when we did penetration testing, was to use it early to look for specific design issues. (This was really adversarial design and code review, although we called it penetration testing.) We also used penetration testing late in the development cycle to confirm that developers had followed the process, applied their training, run the tools, and fixed the issues the tools reported.
We had security folks who worked with the development groups, but their role was primarily to provide advice on threat modeling and to help troubleshoot tough issues – not to check up on the developers. Because the process was mandatory, we wanted to confirm that it had been followed, but we tried to do that with automation built into the tools and build systems, so that a single database query could tell us every place where developers had not completed the process or met its requirements.
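To make the idea concrete, here is a minimal sketch of what that kind of query might look like. The table, column names, and data are purely hypothetical illustrations of a build-system compliance database – not Microsoft’s actual tooling:

```python
import sqlite3

# Hypothetical compliance database populated by the build system.
# Schema and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE components (
        name TEXT,
        static_analysis_clean INTEGER,  -- 1 if all tool warnings were fixed
        threat_model_reviewed INTEGER   -- 1 if the threat model review was done
    )
""")
conn.executemany(
    "INSERT INTO components VALUES (?, ?, ?)",
    [("kernel", 1, 1), ("net-stack", 0, 1), ("ui-shell", 1, 0)],
)

# A single query surfaces every component that has not met the requirements.
rows = conn.execute("""
    SELECT name FROM components
    WHERE static_analysis_clean = 0 OR threat_model_reviewed = 0
    ORDER BY name
""").fetchall()
print([name for (name,) in rows])  # → ['net-stack', 'ui-shell']
```

The design point is that compliance data falls out of the development process itself, so nobody has to chase developers with checklists after the fact.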
As a result, the developers realized fairly quickly that product security was their job rather than ours. And instead of having 20 or 30 security engineers trying to “inspect (or test) the security in” the code, we had 30 or 40 thousand software engineers trying to create secure code. It made a big difference.
Back to risk management and compliance. Early in my professional career, I came across Conway’s Law, which says that “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” It is normally interpreted to mean that the structure of a system mirrors the structure of the organization that creates it. For software development (from Wikipedia): “…the software interface structure of a system will reflect the social boundaries of the organization(s) that produced it, across which communication is more difficult.”
The interaction between development teams is not the same as the interaction between a development team and a security team. But thinking about Conway’s Law, I wondered whether software security assurance teams that are not part of a development organization might be doomed by their organization’s social boundaries to trying to achieve software assurance with after-the-fact inspections and testing. If you are part of a compliance or audit or inspection team that is organizationally separate from development, the natural approach may be to let the developers build the software however they build it, and then check afterwards to see whether it is secure. That approach fits the model of security as an external compliance function. But from a secure development perspective, it is a flawed approach.
This approach is really difficult to make work, because it means the developers (and the security team) only find out about security issues once the software is just about ready to ship. At best, it makes errors difficult and expensive to correct, and it increases the pressure to “ship now and accept the risk.” In this model, you say you’ll fix the security bugs in the next release – and hope that no vulnerability researchers find them, and no bad guys exploit them, in the meantime. Not good for the security of the product or the customer. Not good for the reputation of the company either. And this situation can be bad for developer morale too. I remember, before the inception of the SDL, when the Seattle Times ran front-page headlines after vulnerabilities were discovered in a new version of the operating system. My security team was unhappy, but the development staff were proud of the company, and you can believe they noticed the headlines too!
I’m not saying that the only way a software security program can work is for the software security team to be part of the development organization. But I am saying that a successful software security team has to understand how the development organization works, work cooperatively with it, and focus on enabling developers to create secure software as part of their task of creating software. That’s why I keep coming back to Conway’s Law. Much of software development is about communication, and how the different organizations within a software developer communicate is a key factor in successfully creating secure new products.
This focus on empowerment involves a commitment to developer training, tools, and guidance, and an approach to compliance that relies on artifacts of the development process rather than after-the-fact efforts. Especially today, with development teams using agile or devops approaches and feeling pressure to deliver in hours or days rather than months or years, this is really the only way software security can work effectively for an organization.
Copyright © 2018 IDG Communications, Inc.