Thursday, April 13, 2017
In Part 1, I wrote about how implementing shorter deployment cycles is imperative for companies like AutoX (i.e. companies like Ford, Toyota and Airbus) and for PLM vendors (i.e. companies like Dassault Systèmes and Siemens PLM), and how implementing DevOps practices is the way to achieve these shorter cycles.
In Part 2, I wrote about how to achieve the seemingly impossible dream of a major PLM version upgrade in an Auto company in one month and minor version upgrades in a week, and what features PLM vendors should add to support such fast deployment cycles.
In this part, I plan to write about what changes PLM customers like AutoX (i.e. companies like Ford, Toyota, Airbus, etc.) have to make in their way of working to achieve fast PLM updates.
DevOps for PLM is imperative for AutoX to reduce the maintenance, upgrade and enhancement costs of PLM while taking maximum advantage of new PLM features in day-to-day work.
I am assuming you (the reader) are the AutoX company.
Point 1: Understand that you are a 'software company' now (whether you like it or not).
For you, the situation is actually more complex than for a traditional software company, because you have to 'integrate' software into your own workflow. So think about how you will manage the source code of your software (configuration management), compiled executables/binaries, release cycles, code integration, feature/bug life cycles, version management, deployment management, etc.
Even though you are a software company, you are probably not developing your own software product, and you are not a company doing projects for others. You are somewhat like a 'systems integrator'. You have your own unique set of challenges. Unfortunately, software literature usually focuses on 'products' or 'projects'. There are very few references available for your situation.
Point 2: You will have to customize the PLM and other enterprise software for your own needs. Out of The Box (OOTB) will not give you the competitive advantage that you need.
- These customizations will be done by your own team, the PLM vendor, or some third-party development company.
- You have to integrate code from multiple sources. These code-bases may be delivered at different intervals, with different technology stacks.
- These code bases will have complex dependencies (sometimes circular dependencies).
- Compiling these code-bases and deploying them in production is a complex task.
- Tracking the deployment metrics and production performance is required.
It is possible to apply 'assembly line' concepts from manufacturing (coming from Kanban, the Toyota Production System, the Theory of Constraints, etc.) to this software assembly line and thereby improve its efficiency.
- Think about dependencies. Identify and break circular dependencies.
- Treat the whole program as a 'system' and apply 'systems engineering' concepts to streamline workflows.
- Apply concepts like controlling WIP, reducing batch sizes, etc. Features not yet delivered to the end user are 'inventory'. Features under development are 'Work In Progress' inventory. The time-boxed sprints of Scrum are essentially a way of controlling WIP and reducing batch size.
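Identifying and breaking circular dependencies (as suggested above) can be automated. The sketch below is a minimal depth-first-search cycle detector over a dependency graph; the module names (`cad_plugin`, `plm_core`, `reporting`) are hypothetical, purely for illustration.

```python
from collections import defaultdict

def find_cycle(dependencies):
    """Detect a circular dependency among modules via depth-first search.

    dependencies: dict mapping module -> list of modules it depends on.
    Returns one cycle as a list of modules, or None if the graph is acyclic.
    """
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = defaultdict(int)
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in dependencies.get(node, []):
            if color[dep] == GRAY:                  # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color[dep] == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(dependencies):
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

# Hypothetical module graph for illustration:
deps = {
    "cad_plugin": ["plm_core"],
    "plm_core": ["reporting"],
    "reporting": ["plm_core"],   # circular: reporting <-> plm_core
}
print(find_cycle(deps))   # -> ['plm_core', 'reporting', 'plm_core']
```

Running this over the customization code-bases as part of the build would flag circular dependencies before they calcify.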
So where to start?
- Define configuration management tools and practices.
- Decide which configuration management tool will be used in-house, and which tools will be used by your vendors.
- Add every customization to configuration management (including build scripts, database schema migration scripts, deployment scripts, etc.).
- Define how the code-base delivered by a vendor will be merged into your configuration management tool.
- Define configuration management practices in such a way that you can easily identify what has changed between versions.
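Identifying what changed between two vendor deliveries can be as simple as hashing every file in the old and new release trees. This is a minimal sketch, not tied to any particular configuration management tool:

```python
import hashlib
from pathlib import Path

def tree_digest(root):
    """Map each file's path (relative to root) to the SHA-256 of its content."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def diff_versions(old_root, new_root):
    """Classify files as added, removed or modified between two release trees."""
    old, new = tree_digest(old_root), tree_digest(new_root)
    return {
        "added":    sorted(set(new) - set(old)),
        "removed":  sorted(set(old) - set(new)),
        "modified": sorted(f for f in set(old) & set(new) if old[f] != new[f]),
    }
```

A report like this, generated for every delivery, tells you exactly which customizations need to be re-applied and re-tested.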
- Mandate that the vendor has to deliver 'automated test scripts' along with source code (and not just test results).
- A major bottleneck in DevOps implementations is the lack of automated test scripts.
- If you need to manually test all new features/bug fixes, then the deployment cycle (i.e. your batch size of features) increases a lot.
- Overall, not having 'automated tests' reduces the efficiency of a DevOps implementation.
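To make "deliver automated test scripts" concrete, here is the shape such a delivery could take. The `PartService` class is a stand-in I invented so the example runs; in reality the vendor's own API client would be tested:

```python
import unittest

class PartService:
    """Stand-in for a hypothetical PLM 'part' API; a real delivery would
    exercise the vendor's actual client. Used here only to be runnable."""
    def __init__(self):
        self._parts = {}

    def create_part(self, number, revision="A"):
        self._parts[number] = {"revision": revision, "state": "In Work"}
        return self._parts[number]

    def release(self, number):
        self._parts[number]["state"] = "Released"

class PartLifecycleRegressionTest(unittest.TestCase):
    """The kind of automated regression test a vendor should ship: it
    encodes expected behaviour, so every upgrade can be re-verified
    mechanically instead of by manual testing."""
    def setUp(self):
        self.svc = PartService()

    def test_new_part_starts_in_work(self):
        part = self.svc.create_part("PN-1001")
        self.assertEqual(part["state"], "In Work")

    def test_release_moves_part_to_released(self):
        self.svc.create_part("PN-1001")
        self.svc.release("PN-1001")
        self.assertEqual(self.svc._parts["PN-1001"]["state"], "Released")
```

Tests like these, delivered with every code drop, are what make a one-week integration cycle realistic.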
- Define Integration Pipeline
- How will the code be merged?
- How will it be compiled and the executables created?
- How will it be 'staged' on a test environment?
- How will the automated tests run?
- How will automatic deployment happen?
- Every single step in the integration pipeline will be 'managed' in your configuration management system.
- Once code is delivered by the vendor (or released by your in-house team), the entire integration process should take less than one week.
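The pipeline steps above can be sketched as a simple orchestrator that runs each stage in order and stops at the first failure. The step names are illustrative; real steps would shell out to your SCM, build tool, test runner and deployment tool:

```python
def run_pipeline(steps, log=print):
    """Run integration pipeline steps in order; stop at the first failure.

    steps: list of (name, callable) pairs; each callable returns True on
    success. Returns the name of the failed step, or None if all passed.
    """
    for name, step in steps:
        log(f"[pipeline] {name} ...")
        if not step():
            log(f"[pipeline] FAILED at {name}; aborting")
            return name
    log("[pipeline] all steps passed")
    return None

# Hypothetical steps, standing in for real merge/build/test/deploy commands:
pipeline = [
    ("merge vendor code", lambda: True),
    ("compile and package", lambda: True),
    ("stage on test environment", lambda: True),
    ("run automated tests", lambda: True),
    ("deploy to production", lambda: True),
]
run_pipeline(pipeline)
```

In practice a CI server (Jenkins, GitLab CI, etc.) plays this role, but the logic is exactly this: a linear chain of gated steps, all defined in files kept under configuration management.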
- Define Integration and release cadence.
- Make sure integration and release cycles are as short as possible.
- Make sure that 'deployment downtime' is as short as possible. Use newer cloud deployment tools like on-demand virtual machines, Docker containers, etc.
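One common way to drive deployment downtime toward zero is a blue-green deployment: install and verify the new version on the idle environment, then switch traffic atomically. A minimal sketch of the idea (version strings and environments are invented for illustration):

```python
class BlueGreenRouter:
    """Minimal blue-green deployment sketch: two environments, one live.
    The new version is installed and health-checked on the idle one, then
    traffic is switched, so downtime is just the switch itself."""
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": None}
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, health_check):
        target = self.idle
        self.environments[target] = version      # install on the idle env
        if not health_check(target):             # verify BEFORE switching
            raise RuntimeError(f"{version} failed health check on {target}")
        self.live = target                       # atomic traffic switch

router = BlueGreenRouter()
router.deploy("v2.0", health_check=lambda env: True)
```

If the health check fails, the live environment is untouched and rollback is free, which is exactly why this pattern suits risk-averse PLM deployments.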
- Define a 'sane' agile change management process.
- Make sure 'change management' is part of Integration pipeline.
- When projects/companies move from Waterfall to Agile (especially with code developed by a vendor), the biggest confusion is about managing 'change requests'.
- Measure everything in production
- Use tools like 'fluentd', the 'TICK stack' or the ELK stack to collect metrics from production deployments.
- Create dashboards which show these production metrics to your team.
- Share the dashboards with your development team. Let them see how the applications they developed are performing in production.
- To facilitate this data collection in production, define design/coding practices which will push the data to these systems.
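As an example of such a coding practice, application code can emit each metric sample in InfluxDB line protocol, which is the wire format the TICK stack ingests. This is a simplified sketch (real line protocol has more escaping and type rules; measurement and tag names here are invented):

```python
import time

def to_line_protocol(measurement, fields, tags=None, ts=None):
    """Format one metric sample in (simplified) InfluxDB line protocol:

        plm_request,operation=checkout duration_ms=152 <timestamp>

    fields: numeric values to record; tags: indexed labels for filtering.
    """
    tag_part = "".join(f",{k}={v}" for k, v in sorted((tags or {}).items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts if ts is not None else time.time_ns()
    return f"{measurement}{tag_part} {field_part} {ts}"
```

A thin wrapper like this, called from every customization, is what turns "measure everything in production" from a slogan into data on a dashboard.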
- In case of mixed deployment (part desktop, part server), define and implement how 'automatic' deployment/upgrade of the desktop parts will be done along with the server parts.
- PLM systems require integration with CAD/CAM/CAE applications and customization of those applications.
- A DevOps implementation will require pushing changes to production for these applications as well. An automatic update mechanism will be of tremendous help.
- Building metrics and bug/crash reporting inside these customizations will increase the efficiency even more.
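Crash reporting inside a customization can be as light as a decorator around each entry point. A minimal sketch (the `export_to_plm` function and in-memory crash log are invented; in production the report would go to a central server):

```python
import functools
import traceback

CRASH_LOG = []   # stand-in for a central crash-reporting service

def report_crashes(func):
    """Wrap a CAD-customization entry point so unhandled errors are
    recorded instead of disappearing with the host application."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            CRASH_LOG.append({
                "function": func.__name__,
                "error": repr(exc),
                "traceback": traceback.format_exc(),
            })
            raise
    return wrapper

@report_crashes
def export_to_plm(part_name):
    # hypothetical CAD-side operation that may fail
    if not part_name:
        raise ValueError("no part selected")
    return f"exported {part_name}"
```

Every crash then arrives with a function name and stack trace attached, which shortens the feature/bug life cycle discussed in Point 1.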
Please share your feedback.
Tuesday, January 17, 2017
In Part 1, I talked about how implementing shorter deployment cycles is imperative for companies like AutoX (i.e. companies like Ford, Toyota and Airbus) and for PLM vendors (i.e. companies like Dassault Systèmes and Siemens PLM), and how implementing DevOps practices is the way to achieve these shorter cycles.
My colleague Sreekanth Jayanti shared this comparison that illustrates the benefits of 'shorter deployment cycles'.
Now in this part, I intend to explain how to achieve the seemingly impossible dream of a major PLM version upgrade in an Auto company in one month and minor version upgrades in a week.
Let's continue with the example of AutoX (an automotive company implementing PLM) and PLMX (the PLM vendor). To achieve this dream, AutoX has to change its way of working and PLMX has to change its licensing model, and to some extent even its business model. Let's start with the changes PLMX has to make.
Usually, deploying/upgrading to a new version of PLMX will require:
- Creating the new version setup
- Re-applying all customizations to the newer version (e.g. changed web pages, UI customizations, workflow changes, upgraded plugins, etc.) and testing them
- Testing that all existing integrations work with the newer version; if they don't, fixing the bugs, removing deprecated APIs, etc. to make them work
- Upgrading the database schema
- Migrating the data to the newer schema
- Upgrading the documentation, etc.
To achieve all these steps in a 'short cycle', PLMX has to make many changes in its way of working and its licensing model.
PLMX should license the tools it developed for in-house cloud deployment and upgrades to customers
For many years, PLMX has acted as if the difficulties of 'deployment' and 'upgrade' are not really its problem but the problem of AutoX (i.e. the customer's problem). This thinking is now changing (but more slowly than expected). The major driver for this change is 'cloud deployment' of PLMX. Now PLMX is managing its own 'production cloud deployment' and is facing all the deployment and upgrade problems of AutoX. Obviously, PLMX is better equipped to handle these challenges, and it is developing tools to simplify these tasks. AutoX (i.e. the customers of PLMX) requires exactly the same kind of tools. Today, PLMX is not licensing these tools to its customers yet. And that is the first change PLMX has to make.
PLMX should license its automated regression test suite for the public interface to customers
The major driver in achieving 'shorter' deployment cycles is 'automated tests'. There is NO way AutoX can achieve a one-month deployment cycle if it relies on manual regression testing. Also, AutoX will not be able to write completely new automated tests for every upgrade cycle; it has to 'reuse' the tests already written. It will make AutoX's life a lot easier if PLMX includes its 'automated tests' as part of the PLMX license. AutoX can then adapt these tests to the customizations that AutoX has done. When a new release of PLMX is available, AutoX takes the new set of JARs, JSPs and unit tests from PLMX, re-applies its customizations to this set and then tests the new version with its own customizations in its own test environment.
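One way such test reuse could look in practice: the vendor's tests exercise behaviour through overridable hooks, so the customer subclasses them, re-pointing only the hooks at its customized logic while inheriting every assertion unchanged. A sketch with invented class and field names:

```python
import unittest

class VendorPartTests(unittest.TestCase):
    """Stand-in for a test class shipped by the PLM vendor. Behaviour is
    tested through one overridable hook (create_part), so a customer can
    re-run the same assertions against a customized deployment."""
    def create_part(self, number):
        return {"number": number, "state": "In Work"}   # vendor default

    def test_new_part_starts_in_work(self):
        self.assertEqual(self.create_part("PN-1")["state"], "In Work")

class AutoXPartTests(VendorPartTests):
    """AutoX inherits every vendor assertion and overrides only the hook
    so parts are created through AutoX's customized logic."""
    def create_part(self, number):
        part = super().create_part(number)
        part["plant"] = "Pune"          # hypothetical AutoX customization
        return part

    def test_customization_adds_plant(self):
        self.assertEqual(self.create_part("PN-1")["plant"], "Pune")
```

On every PLMX upgrade, AutoX replaces the vendor base classes and re-runs the whole suite; only tests touching changed behaviour need attention.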
Even better if PLMX shares its automated regression test suite on a sharing platform like GitHub
I will dream some more and assume that PLMX has put its 'public test suite' on a sharing platform like GitHub. Now AutoX can just 'clone' the unit tests from GitHub and adapt them to test its own customizations. AutoX can then contribute its own tests (which illustrate some bugs) back to this sharing platform. All customers of PLMX are now sharing automated unit tests and effectively making their 'production deployments' faster.
PLMX should develop tools for 'incremental' migration of data
PLMX already provides some tools to manage database schema changes. However, applying these 'schema changes' to production databases is messy and time consuming. When AutoX migrates its PLMX back-end database to a new schema, issues are invariably detected and 100% of the data is not migrated in the 'first attempt'. So incremental data migration tools are critical: the second attempt should migrate just the 'failed' data and should not start from scratch again.
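The core of such an incremental migration tool is bookkeeping: record which rows succeeded and which failed, and on re-run touch only the rest. A minimal sketch, with plain dicts standing in for persisted migration state:

```python
def migrate(rows, transform, migrated, failed):
    """One migration pass: transform each not-yet-migrated row, recording
    successes and failures so a re-run retries only the failed rows.

    rows:     dict of row_id -> old-schema row
    migrated: dict of row_id -> new-schema row (persisted between runs)
    failed:   set of row_ids to retry (persisted between runs)
    """
    for row_id, row in rows.items():
        if row_id in migrated:
            continue                  # already done in an earlier pass
        try:
            migrated[row_id] = transform(row)
            failed.discard(row_id)
        except Exception:
            failed.add(row_id)        # leave for the next pass
    return migrated, failed

def to_new_schema(row):
    # hypothetical schema transformation; fails on unconvertible data
    if row == "bad":
        raise ValueError("cannot convert")
    return row.upper()
```

After the first pass, the team fixes the source data (or the transform) for the failed rows and re-runs; rows already in `migrated` are skipped, so a second attempt never starts from scratch.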
PLMX should develop tools/recipes for cloud deployment using virtualization and containerization of its components
Today, PLMX comes with an 'installer' where the IT admin has to click 'Next' and select various options to set up the newer version of PLMX. To some extent, PLMX now uses virtual machine images for test setups. But there is no containerization yet. Chef/Puppet recipes are not available yet. Automatic provisioning and horizontal scaling of a PLMX deployment are still not easily possible.
PLMX should start using scalable, distributed data stores like Hadoop, Apache Cassandra
The PLMX back-end is still a traditional RDBMS (e.g. Oracle Database or Microsoft SQL Server). Both Oracle and Microsoft SQL Server now support 'horizontal scaling/scale-out/distributed database architectures'. Open-source data stores like Hadoop and Cassandra also provide high availability and performance. The PLMX back-end should be scalable, providing high availability without a single point of failure.
Of course, all these steps will help PLMX in the 'cloud deployment' of its own application. It will take at least 3 to 5 years for PLMX to achieve all these steps. However, PLMX will need a 'marquee' customer like AutoX to try out all these tools in a production scenario. And that is Part 3 of this series.