Tuesday, 1 December 2009
A powerful combination of a metadata registry, collaboration framework, rich UI and a broad range of "asset" discovery filters allows users to quickly document their enterprise applications in a centralized knowledge base.
This knowledge base can be used to familiarize newcomers to your projects with the key assets, encourage re-use, reduce duplication and waste, and reduce the risks to your projects when key staff leave.
Tuesday, 27 October 2009
Be it SOA or WOA, WS-* or REST, these strategies offer a way to vastly increase the agility of the information technology framework within an organization and yield many downstream benefits. Like all systems, natural or otherwise, Service Infrastructures are built on roots which, if ignored, will lead to a weak, under-nourished and poorly performing system. The important “roots” in question here are the metadata that underpins the service framework and describes the messages and services that nourish your Enterprise.
Root or Branch?
When people think of service-orientation, the focus naturally falls on the services and processes that they wish to create, compose and automate. If our metaphor casts metadata as the roots, then the orchestrations and services are the branches and leaves. This straightforward approach of focusing on the “big things” seems obvious, yet it risks ignoring the much larger mass of albeit smaller entities: the metadata that supports them. For every service or process there exist many hundreds of other, smaller components, and each piece of metadata is either a potential asset or a hindrance.
Figure 1 (left/above) depicts the landscape of service-oriented design and the relative size of the problem posed by the metadata layer. Much time is spent creating this root ball, yet its existence is largely hidden, sitting below the conscious level of the knowledge workers busily building out the service infrastructure. This is ironic, as the cost of building up this mass of information, and the implications it has for the larger system, make it one of the most valuable resources the I.T. organization possesses (if used effectively). If ignored, it can multiply the cost of service delivery, greatly extend the complexity of the service and orchestration layers, and drastically reduce the overall “agility” of the whole service infrastructure.
How these negative effects permeate through the service infrastructure is as simple as can be, yet they take hold slowly, quietly and unseen in the day-to-day activities of most infrastructure projects. Written down and exposed, this causal chain is obvious, which makes it all the more surprising that it undermines, or outright precipitates the failure of, countless project investments:
- The direct cost of duplication (lack of re-use)
- Fragmentation of architecture (leading to complexity)
- Increased complexity slowing the development system (compromising agility)
- Loss of agility leading to slower development cycles, increased costs and overall failure to deliver proposed benefits of service-orientation
The problem starts innocuously enough: a developer, either under pressure to meet a deadline or oblivious to the assets available around them, creates a new message form to represent a business concept. This form is similar to, but critically not the same as, other forms of the concept and messages used elsewhere. Initially no symptoms are observed, but the first impact on the service initiative has already occurred: the increased cost of our developer spending time thinking about and building something that already existed. Hard to measure, maybe, but it is a tangible cost and the tip of an ongoing series of costs that flow from that one event. By developing the duplicate asset, the need to maintain it has also been created. Whereas the act of creation was a one-time event, the maintenance, additional documentation and complexity it introduced live on for many years.
Fragmentation and Complexity
The next symptom typically appears at the process layer, where various services and messages are combined. Having similar but not identical forms of a particular data or service concept leads directly to more complexity in the form of transformations and other interface code. Message transformations make service orchestrations far more complex than they need be and magnify the amount of code involved, which in turn bloats the testing and maintenance burden. An obvious effect of increased complexity is slower implementation and update cycles. However, the fragmentation that was introduced also directly erodes the agility of the whole infrastructure by making it harder to combine services, due to the need for complex handling of similar-but-different metadata.
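To make the cost concrete, here is a minimal sketch of the transformation glue that near-duplicate message forms force into an orchestration. The field names and forms are hypothetical, invented purely for illustration:

```python
# Two near-duplicate "customer" message forms, as might arise when a
# developer recreates a concept that already existed elsewhere.
# (Field names are illustrative, not from any real system.)
billing_customer = {"custId": "C-1001", "name": "Acme Ltd", "postcode": "SW1A 1AA"}
crm_customer = {"customer_id": "C-1001", "display_name": "Acme Ltd", "post_code": "SW1A 1AA"}

def crm_to_billing(msg):
    """Transformation glue forced by the duplicated form. Every such
    near-duplicate pair needs code like this, plus its own tests,
    documentation and maintenance, for the life of both forms."""
    return {
        "custId": msg["customer_id"],
        "name": msg["display_name"],
        "postcode": msg["post_code"],
    }

assert crm_to_billing(crm_customer) == billing_customer
```

Each pairing of similar-but-different forms adds another mapping like this one, and every field added to either form must be reconciled in the mapping as well.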
In short, increased size, complexity and fragmentation of the metadata layer drive additional costs into the development, deployment and maintenance of service-oriented solutions and, critically, reduce the agility of the service and process layers. More metadata translates into a larger quantity of required code and other artifacts, which in turn leads to higher development and maintenance costs. Multiple entity representations (or fragmentation) lead to higher process complexity. Fragmentation and higher complexity lead directly to less agility. Less agility means longer implementation cycles and undermines the ability of the business to respond to its changing requirements.
A Better Approach
The solution to these detrimental effects is simple to state, harder to achieve, and starts with providing “extreme transparency” into the assets that make up the service infrastructure. The use of metadata registries, knowledge organization systems and collaborative tools, ideally integrated together, provides a firm foundation on which to build a methodology that optimizes the use of supporting metadata. The benefit of such tools comes down to a single word: “visibility”. Had our earlier developer had effective insight into what already existed, the duplication and fragmentation could most likely have been avoided. The result: less code to maintain. The services derived from the now-shared asset would be more easily combined, composed and re-used; the complexity of the process compositions would be lower; the cost of development and maintenance would be reduced.
However, no amount of “visibility” will solve the problem completely. There also needs to be a shared taxonomy by which assets are classified, a methodology for categorizing new assets, and a pro-active search for opportunities to reuse and consolidate. Building re-use into the delivery teams' goals from the outset establishes awareness; monitoring re-use through deep dependency analysis ensures those goals are understood and progressed.
Practical Steps to Service-Oriented Agility:
- 1. Create a centralized knowledge base of service and metadata assets.
(You need a place where assets can be documented and easily located.)
  - You can use disparate tools, open source, or look for a tailored solution
  - A wiki and a process to encourage documentation and knowledge sharing is a good start
- 2. Adopt a shared taxonomy for describing assets.
(The process of discovery is aided by having mechanisms to “tag” or describe assets in a uniform way, thereby aiding discovery and reuse)
  - Thomas Erl's “Inventory Patterns” provide a good starting point
  - UDEF and other semantic models are an emerging technology that may help
  - Look at one of the canonical message libraries as a potential base to build upon
- 3. Measure and monitor re-use.
(The old project managers' adage “what gets measured gets done...” comes in handy here)
- 4. Build in a process for dealing with duplication.
(Duplication and fragmentation are a practical reality no matter how good your methodology, so build in a process for dealing with them)
  - Hold regular “asset” reviews at both the service and metadata layers
There are multiple solutions already in place that assist an organization in maintaining a “map” of the operational service layer and managing the process of taking a service through the testing phase into production use. However, these tools tend to ignore coherent metadata management during design and build, the linking of the service and process layers to the assets that support them, and the process of collaborating to enrich the description of those assets. A home-grown combination of service registry, wiki and other groupware may suffice, but the more tightly integrated, the better.
Adopting a taxonomy helps an organization relate assets to business concepts and share ideas more easily between teams of knowledge workers. Many different approaches have been written about and can be used individually or in combination. The aim is to provide a framework that allows each asset to be classified in order to promote its visibility when it is needed; i.e. a developer looking to wrap a legacy stock control system should easily be able to find existing assets associated with the “inventory” domain as it is understood by that specific organization. It could be as simple as “tagging” assets with well-known keywords (e.g. inventory) or as sophisticated as an ontology-based approach.
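The simple “tagging” end of that spectrum can be sketched in a few lines. The asset names and tags below are hypothetical, stand-ins for whatever registry or wiki an organization actually uses:

```python
# Minimal sketch of keyword-tagged asset discovery. Asset names and
# taxonomy keywords are illustrative, not from any particular product.
ASSETS = {
    "StockLevelMessage": {"tags": {"inventory", "warehouse"}, "kind": "message"},
    "ReorderService": {"tags": {"inventory", "purchasing"}, "kind": "service"},
    "InvoiceMessage": {"tags": {"billing"}, "kind": "message"},
}

def find_assets(tag):
    """Return the names of assets classified under a taxonomy keyword."""
    return sorted(name for name, meta in ASSETS.items() if tag in meta["tags"])

# A developer wrapping a legacy stock control system searches the
# "inventory" domain before building anything new.
print(find_assets("inventory"))  # ['ReorderService', 'StockLevelMessage']
```

The value is not in the lookup itself but in the convention: every asset gets classified against the shared vocabulary at creation time, so the search above is worth running before writing any new code.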
Measurement is key to understanding the operational effectiveness of the methodology, allowing you to learn and fine-tune. A system of deep dependency analysis that can report on asset reuse will let you find areas of concern and highlight examples of success, allowing you to evolve a best practice that fits your own organization.
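A basic form of such reporting can be sketched as a count of consumers per asset over a dependency graph. The graph and asset names here are invented for illustration; a real system would extract the edges from artifacts such as WSDLs, schemas and orchestration definitions:

```python
from collections import Counter

# Hypothetical dependency graph: each asset lists the assets it uses.
DEPENDS_ON = {
    "OrderProcess": ["OrderMessage", "CustomerMessage"],
    "BillingProcess": ["InvoiceMessage", "CustomerMessage"],
    "InvoiceMessage": ["CustomerMessage"],
}

def reuse_counts(graph):
    """Count direct consumers per asset. A count > 1 indicates re-use;
    a count of 1 (or an asset absent from all edges) flags a candidate
    for review, consolidation or pruning."""
    counts = Counter()
    for deps in graph.values():
        counts.update(deps)
    return counts

counts = reuse_counts(DEPENDS_ON)
print(counts["CustomerMessage"])  # 3 -> a well-shared asset
print(counts["OrderMessage"])     # 1 -> used by a single consumer
```

Trending these counts release over release is one way to check whether the re-use goals set for delivery teams are actually being progressed.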
Finally, implement a process of frequent asset review (by service domain for very large organizations). Delegate this review to your architects, analysts and senior developers, and task them with optimizing the asset base through aggressive pruning and refactoring. Do this in the knowledge that assets which add mass to your metadata roots add cost to your processes as well. There is no such thing as benign metadata, only metadata that is productive and metadata that hinders.
Interesting readings on related subjects ...
- Adjoovo Spaces – a collaborative, service-oriented, metadata registry
- Thomas Erls' – SOA Design Patterns (inventory patterns)
- Canonical Message Libraries and Ontologies
Thursday, 22 October 2009
Monday, 12 October 2009
- it can inspect a wide range of technical artifacts and extract metadata from them
- it creates relationships between the metadata that can be navigated and reported on
- it allows you to "annotate" or "enrich" the metadata in a non-destructive way
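One way to achieve non-destructive enrichment is to keep user-contributed annotations in an overlay separate from the extracted metadata, merging the two only when an asset is viewed. This is an illustrative design sketch, not the API of any specific product:

```python
# Hypothetical design: extracted metadata is never modified, so the
# inspection step can safely re-extract it at any time. Enrichment
# lives in a separate overlay keyed by asset id.
extracted = {"svc.stock": {"wsdl": "StockService.wsdl", "operations": ["getLevel"]}}
annotations = {}  # asset id -> user-contributed key/value notes

def annotate(asset_id, key, value):
    """Record an enrichment without touching the extracted record."""
    annotations.setdefault(asset_id, {})[key] = value

def view(asset_id):
    """Merged read-only view: extracted facts plus any enrichment."""
    return {**extracted.get(asset_id, {}), **annotations.get(asset_id, {})}

annotate("svc.stock", "owner", "inventory-team")
print(view("svc.stock")["owner"])         # inventory-team
print("owner" in extracted["svc.stock"])  # False: source metadata untouched
```

Because the overlay is keyed by asset id, re-running extraction replaces the facts while the human-added context survives.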