I think about digital transformation as having three pillars: infrastructure, integration, and knowledge.
Most vendor and customer views of transformation are framed from an infrastructure perspective, which focuses on technologies and products. However, transformation cannot be supported by infrastructure alone.
With business agility as the objective, transformation efforts must also establish a "single source of truth" that houses knowledge to inform the organization’s business processes and decisions. In other words, there should be one version of each piece of information that exists within the organization, gathered in one place, which eliminates inconsistencies and pulls information out of inaccessible locations.
Housing knowledge in a centralized, accessible, and usable form contributes heavily to increased business agility. Think about it this way: your car has a single engine that powers the whole vehicle. Having a separate engine for each tire would be a nightmare! Assembly would be more challenging, performance would suffer, average miles-per-gallon would plummet, and maintenance costs would skyrocket. For similar reasons, you should not let your business decisions run on multiple sources of knowledge that can disagree with one another. Doing so hurts your business and impedes any digital transformation effort.
So why does this still happen? Time and financial constraints enable, or even encourage, short-sighted design and implementation that fails to create that single engine. The result is significant technical debt, which erodes an organization’s ability to realize the business agility that actually produces desired outcomes.
See our blog post: Knowledge Management, Digital Transformation: How to Design for the Knowledge Lifecycle
The big obstacle in all of this is that operational decision-making, an organization’s biggest asset, is frequently left to the IT staff’s discretion during implementation. This is because the knowledge that informs decision-making is too often buried in technology that cannot be consistently accessed, maintained, or improved. Far too often, this information lies within databases, application programming languages, front-end applications, and scripts.
Since knowledge can be so deeply buried and inaccessible to users, successful knowledge management requires collaboration. Collaboration allows organizations to standardize data, information, processes, decisions, and tribal knowledge into a single source of truth. It allows organizations to codify knowledge into a transparent and shared repository of organizational knowledge where it is centralized, visible, and accessible to internal and external stakeholders.
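As a minimal sketch of what "a single version of each piece of information" can mean in practice (the key names, versioning scheme, and structure here are illustrative assumptions, not taken from any specific product), a single source of truth can be modeled as one keyed store where each piece of knowledge has exactly one current entry:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeRepository:
    """One canonical entry per key; updates supersede, never fork."""
    _entries: dict = field(default_factory=dict)

    def publish(self, key: str, value, version: int = 1):
        # Publishing under an existing key replaces the old version,
        # so every consumer sees the same current answer.
        current = self._entries.get(key)
        if current and version <= current["version"]:
            raise ValueError(f"{key}: version must increase")
        self._entries[key] = {"value": value, "version": version}

    def lookup(self, key: str):
        return self._entries[key]["value"]

repo = KnowledgeRepository()
repo.publish("credit.approval_threshold", 680)
repo.publish("credit.approval_threshold", 700, version=2)
# Both the loan application and the reporting job now read the same value.
threshold = repo.lookup("credit.approval_threshold")
```

The point of the sketch is the shape, not the code: every stakeholder reads from the same store, and an update happens in exactly one place.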
Who does this collaborating? Effective knowledge management requires close cooperation between the business and IT sides of the organization, but we recognize that this is easier said than done. Therefore, we recommend that you appoint a team of knowledge engineers to shepherd your knowledge management effort.
A knowledge engineer could be a requirements analyst who takes on new technical skills or a technical analyst/developer who learns more about business requirements. The goal of establishing this role is to bridge your business and IT houses so they can develop a common language and achieve the collaboration needed for knowledge management that serves business objectives.
See our blog post: Knowledge Engineer - The New Sheriff in Town
Ultimately, your knowledge engineers have the authority to steer toward the most effective business solution rather than simply the best technical program. They help create not only high-performing, modern applications, but also the features that cater specifically to the needs of the business. Successful collaboration creates a centralized base of knowledge that stores a single version of each piece of information to consistently inform decisions, and it gives that knowledge base the functions and meaning needed to answer common questions and objectives.
In 1913, Henry Ford introduced the assembly line, combining interchangeable parts with subdivided labor to reduce the time it took to produce an automobile from 12 hours to 2 hours and 30 minutes per vehicle. Similarly, DevOps and related infrastructure technologies are ultimately building an IT assembly line. Many industries have seen the benefits of this approach in speed, quality, and throughput.
Thanks to this DevOps IT assembly line, IT staff can now quickly assemble and deploy a high-quality, capable compute platform. However, without the same discipline surrounding knowledge management, we lose out on flexibility. Like a real assembly line, DevOps produces individual updates quickly and specifically, all feeding into one central product. We need the same to happen for our knowledge engine product, so we can add new knowledge, edit existing knowledge, and access knowledge to inform decisions.
The assembly line structure gives us the ability to quickly add and edit both information and its functions. Without it, additions and changes would be slow and difficult, and it could take a long time to see the effects. We want to be able to see individual updates to information in our knowledge base quickly, so we can apply them to real situations quickly as well.
See our blog post: Improve IT Processes: 4 Focus Areas for DevOps Automation
For example, if we needed to become compliant with the General Data Protection Regulation (GDPR) for our EU web visitors, we would want to quickly update the information in our knowledge base with the individual requirements within the GDPR. These include requirements for cookie notifications on websites and restrictions on data collection. Each of these requirements could be entered into our knowledge base and have functions added to it that dictate how our organization collects customer information moving forward. The assembly line process enables us to update only our data collection rules and let the effects ripple out from that one update, the way one improved part contributes to the whole product. The alternative is many people working with many different versions of the same information in different places: an inefficient process with inconsistent or incomplete results.
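To make the GDPR example concrete, here is a hypothetical sketch (the rule names and fields are invented for illustration) in which the data-collection rule lives in one place and every channel, whether a web form, a mobile app, or a call center tool, asks it:

```python
# The data-collection rules are stored once, in the knowledge base.
COLLECTION_RULES = {
    "require_cookie_consent": True,          # GDPR: notify and obtain consent
    "allowed_fields": {"email", "country"},  # GDPR: collect only what is needed
}

def filter_submission(submission: dict) -> dict:
    """Keep only the fields the current rules allow."""
    allowed = COLLECTION_RULES["allowed_fields"]
    return {k: v for k, v in submission.items() if k in allowed}

# One update to COLLECTION_RULES changes behavior everywhere at once:
cleaned = filter_submission(
    {"email": "a@b.eu", "country": "DE", "phone": "555-0100"}
)
```

Because every consumer calls the same rule, tightening `allowed_fields` in the knowledge base ripples out to all channels without anyone editing application code in many places.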
Implementing a knowledge management discipline frees knowledge from its shackles (spreadsheets, databases, etc.) and centralizes it in a repository, known as a knowledge management system, where it can be acted on quickly, confidently, and transparently. This approach lessens the dependency on individual experts and makes it easier to change a decision or process in a single location, where it is then consumed and used by many.
The entire organization develops these to address major processes. These should be broad enough to build consensus across the organization.
For example, these general concepts would be agreed upon and established:
Build domain models that encapsulate business operations to compartmentalize knowledge into standardized, smaller, manageable, extensible, and versionable processes and decisions. These domains exist under the broader concept of operations.
For example, these smaller specific concepts would be agreed upon and established:
Each of these is a smaller domain under the concept of operations, but if we tried to put them all in one big model, it would become impossible to change just one small piece. If a change can be isolated to a single domain, where its impact can be measured and implemented without introducing risk to the other fully functioning models, then we've succeeded. Smaller, focused domains are easier to maintain when changes come.
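As a rough illustration of that isolation (the domains and rules below are assumptions made up for this sketch), each domain owns its own logic and exposes only a narrow interface, so a change to shipping rules cannot break billing:

```python
def billing_total(price: float, tax_rate: float) -> float:
    """Billing domain: price plus tax, and nothing else."""
    return round(price * (1 + tax_rate), 2)

def shipping_cost(weight_kg: float) -> float:
    """Shipping domain: cost by weight; editable in isolation."""
    return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

def order_total(price: float, tax_rate: float, weight_kg: float) -> float:
    # The order process composes domains through their interfaces only;
    # it knows nothing about how each domain reaches its answer.
    return billing_total(price, tax_rate) + shipping_cost(weight_kg)

total = order_total(100.0, 0.07, 3.0)
```

Rewriting `shipping_cost` to use carrier zones instead of weight would touch one function in one domain; `billing_total` and every caller of `order_total` remain untouched.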
Add a security layer that exposes an API framework for inter-system communication and a standardized interface that is utilized by all stakeholders (internal and external). This protects the knowledge you’ve standardized and established in the above steps and makes it available to users to inform decisions and processes.
You do not want just anyone to have access to certain information, so you gate it off and protect it. Placing an API gateway in front of your knowledge management system allows you to grant, restrict, prioritize, measure, and ultimately control who you share your knowledge with.
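A minimal sketch of that gating idea follows; the API keys, roles, and visibility labels are all invented for illustration, and a real gateway product would handle this with far more rigor:

```python
# Who is calling, and what are they allowed to see?
API_KEYS = {
    "partner-key-123": {"role": "partner"},
    "staff-key-456": {"role": "internal"},
}
KNOWLEDGE = {
    "interest_policy": {"visibility": "internal", "value": "tiered"},
    "public_rate_sheet": {"visibility": "partner", "value": "standard"},
}

def gateway_lookup(api_key: str, item: str):
    """Grant or deny access before any knowledge leaves the system."""
    caller = API_KEYS.get(api_key)
    if caller is None:
        raise PermissionError("unknown caller")
    entry = KNOWLEDGE[item]
    # Internal-only knowledge is gated off from external callers.
    if entry["visibility"] == "internal" and caller["role"] != "internal":
        raise PermissionError("not authorized for this knowledge")
    return entry["value"]
```

The same check point is also where you would meter and prioritize callers, since every request for knowledge passes through it.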
Use the microservices approach: a method of developing software systems that focuses on building single-function modules (i.e., Rules-as-a-Service) with well-defined interfaces and operations. This breaks a currently monolithic application into individual functional pieces, providing greater agility and the foundation for gradual adoption of modern software architectures, including process and decision management technologies.
Stick to these rules when using a microservices model:
Microservice design avoids mixing one knowledge fact or function with another; we say that microservices must be composable for this reason. For example, an interest calculation service performs only interest calculations. Determining what the interest rate should be is a separate microservice function, not rolled into the interest calculation. By composing the two, we can take in data, determine an interest rate, and calculate the net effect.
A benefit of establishing these discrete microservice functions is that changing any one of the underlying components that produce the net effect does not affect the others.
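The interest example above can be sketched as two single-purpose functions composed together; the credit-score tiers and rates here are assumptions made up for illustration:

```python
def determine_rate(credit_score: int) -> float:
    """Rate determination only: this never calculates interest."""
    return 0.05 if credit_score >= 700 else 0.09

def calculate_interest(principal: float, rate: float, years: int) -> float:
    """Interest calculation only: the rate is an input, not a lookup."""
    return round(principal * rate * years, 2)

# Composition: data in, rate determined, net effect calculated.
rate = determine_rate(720)
interest = calculate_interest(10_000.0, rate, 2)
```

Swapping in a new rate policy means redeploying `determine_rate` alone; `calculate_interest` and everything composed downstream of it are untouched.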
Without knowledge management, your organization is like a car that uses a different engine for each thing it does. Even if you upgrade the finishes, your car remains difficult to maintain, heavy, inefficient, and even impractical. Your organization, similarly, needs to run its decisions off an authoritative single source of truth. The path to creating one, a central record of knowledge from around your organization that works to inform decisions, looks like an assembly line built from the steps of establishing a knowledge management discipline.
Try our four steps above for establishing a knowledge management discipline in your organization. Sign up below so you don’t miss out when we publish more on this subject.