Fleet Management, of which Strategic Sourcing is a core part, is an integrated set of actions, occurring in a rational and logical manner, with the overall objective of attaining the lowest Total Cost of Ownership (TCO). Key issues in fleet management involve capital commitments and management, as well as operating effectiveness and cost. Fleet asset utilization is not typically tracked or measured, which leads to unwanted outcomes such as having more vehicles than necessary, additional operating and maintenance costs, and not always having the right vehicles for the jobs they are needed to do. Additionally, fleet costs are usually fragmented and rarely captured in total, which makes it difficult to adequately and accurately assess operating efficiency and evaluate outsourcing opportunities.
The first step for true optimization is getting a good handle on the existing fleet in terms of its make-up, utilization and operating cost, reviewing the administrative and operating practices related to procurement, operations, maintenance and disposition, as well as determining replacement scheduling. The foundation is based upon the following three areas:
- Strategy (replacement scheduling, outsourcing/insourcing and fleet organization)
- Operations (vehicle pooling, maintenance & repair, inventory management, fuel management)
- Administration (Standards & specifications, fleet utilization, budget & cost reporting)
The areas to explore when reviewing fleet management practices:
- Fleet inventory (including but not limited to manufacturer and model year, type, location, VIN #, GVWR, acquisition price, options purchased, lease payment, annual operating and maintenance costs, sale price if retired, auction fees and class – how it’s used)
- Equipment Utilization – Miles, hours or both on equipment where there may be two measures of utilization
- Fleet “spend” at invoice level and at options level if available.
- Current agreements and in progress negotiations
- Current leases, short term rentals, and ownership models
Fleet Rationalization, Utilization and Fleet Mix – Once the standards and specifications process has taken place, putting rigor and focus in the area of rationalization and utilization brings value and savings to the company and fleet. The goal of this component of the process is multi-dimensional:
- Ensuring that proper utilization targets by class and location (e.g., metro vs. rural) are set and used to reduce the number of low-use vehicles in the field
- Rationalizing the fleet based on job function and job assignment.
- Developing a fleet policy that optimizes the use of pooling vehicles, how and when to use short-term rentals and take home vehicles.
- Identifying fleet operating needs that may call for surplus vehicles, including seasonal work requirements, construction projects, regulatory mandates, etc.
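The utilization-target idea above can be made concrete in a few lines. A hypothetical sketch in Python – the class names, location types, target miles, and record fields are all illustrative assumptions, not real targets:

```python
# Hypothetical sketch: flag low-use vehicles against utilization targets
# set by vehicle class and location type (e.g., metro vs. rural).

UTILIZATION_TARGETS = {  # minimum annual miles by (class, location type)
    ("LD Truck", "metro"): 8_000,
    ("LD Truck", "rural"): 12_000,
    ("SUV", "metro"): 10_000,
}

def flag_low_use(fleet):
    """Return VINs of vehicles falling below their class/location target."""
    flagged = []
    for v in fleet:
        target = UTILIZATION_TARGETS.get((v["class"], v["location"]))
        if target is not None and v["annual_miles"] < target:
            flagged.append(v["vin"])
    return flagged

fleet = [
    {"vin": "A1", "class": "LD Truck", "location": "metro", "annual_miles": 5_500},
    {"vin": "B2", "class": "LD Truck", "location": "rural", "annual_miles": 14_000},
    {"vin": "C3", "class": "SUV", "location": "metro", "annual_miles": 9_200},
]
print(flag_low_use(fleet))  # ['A1', 'C3']
```

In practice the targets would come out of the standards-and-specifications work, and the flagged vehicles feed the rationalization and pooling decisions.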
Focus on the 80/20 rule when prioritizing fleet opportunities. Develop standards and specifications for the portion of the fleet that can be standardized and will provide the highest value/impact, such as passenger vehicles, SUVs, LD and MD trucks, aerials and digger derricks. Utility and construction equipment is often overlooked.
- Fuel – In most cases, do not incorporate the sourcing of bulk fuel (vs. fuel management services) as part of a fleet sourcing engagement. Past experience has shown that this exercise returns almost no incremental value and usually devolves into an exercise in sourcing transportation from supplier fuel racks to client bulk tank facilities.
- Maintenance & Repair – Maintenance and repair is an integral component of achieving the lowest TCO for the fleet. Maintenance and repair costs will inherently decrease as an output of the standards-and-specifications and replacement-schedule processes. Other areas should also be evaluated, such as opportunities for network consolidation of maintenance and repair shops.
- Determining a “Levelized” Replacement Schedule - Developing a “Levelized” Replacement Schedule is a key concept in improving fleet management and obtaining benefits from strategic sourcing. Sharing the information with both internal finance and external vendors and suppliers is instrumental in planning for future fleet acquisitions and capital needs as well as structuring multi-year deals.
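As a rough sketch of what "levelizing" can mean in code, the snippet below caps the number of vehicles replaced per year and defers the overflow to later years; the lifecycle and cap values are illustrative assumptions:

```python
from collections import Counter

def levelized_schedule(remaining_lives, per_year_cap):
    """Map year offset -> number of vehicles replaced, deferring overflow."""
    due = Counter(remaining_lives)  # naive schedule: replace each vehicle when due
    latest_due = max(due)
    schedule, backlog = {}, 0
    for year in range(latest_due + len(remaining_lives) + 1):
        backlog += due.get(year, 0)          # vehicles coming due this year
        take = min(backlog, per_year_cap)    # cap replacements to level spend
        if take:
            schedule[year] = take
        backlog -= take
        if backlog == 0 and year >= latest_due:
            break
    return schedule

# Ten vehicles all due in two years would spike capital spend;
# a cap of 4/year spreads the buy over three years instead.
print(levelized_schedule([2] * 10, per_year_cap=4))  # {2: 4, 3: 4, 4: 2}
```

A leveled schedule like this is exactly the kind of forward view worth sharing with finance and suppliers when structuring multi-year deals.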
In summary, maximizing fleet effectiveness depends on managing it like a business, in an integrated and holistic fashion, across two major dimensions.
- FLEET OPERATIONS – Operating revenue, Operating costs, Contribution margin, Productivity metrics and measures, Performance metrics and measures
- FLEET ASSET MANAGEMENT – Fleet sizing, Standards and specs, Strategic sourcing, Life-cycle management, Maintenance and repair, Disposition management
Achieving supply chain excellence is complex and challenging, but success in achieving supply-chain-driven competitive advantage enables superior customer service, profitable revenue growth and a significant increase in shareholder value:
- Supply chain assets and inventory usually comprise at least half of all non-store based assets
- Supply chain activities typically account for as much as 40 – 70% of operating costs (including procurement and markdowns)
- Scientific Retailing: Overview of the High Performance Retailing framework, demand and supply value drivers
- Inventory Management: Inventory Management is the conductor of the symphony for Retail Supply Chain execution. It is critical for customer service because inventory management initiates all merchandise movement and controls its timing within the supply chain
Some of the statements from retailers across all kinds of products:
- “Assisted Inventory Management (AIM) helped us exceed our inventory-turn goal, making us the leader among national drugstore chains in this important productivity measure. We achieved inventory turns of 5.0 times for the year, up from 4.6 times in earlier years.” – CVS
- “Positioned among the best in retail, our supply chain helps drive sales, reduce costs and ensure the availability of products our guests most want and need.” – Target
- “We completed the conversion of each of our operating divisions to a common technology platform with greatly enhanced inventory management tools, permitting more sophisticated inventory planning and more precise by-store inventory allocation.” – Saks
The three main components of the Inventory Optimization program address both the process and physical infrastructure of the supply chain.
- IM Process:
- Addresses end-to-end inventory management built on two core processes:
- Foundational: for continually replenished basic merchandise – periodic automatic replenishment, long life, stable supply, short lead time to continually meet normal demand
- Highly Variable: typical of merchandise with high demand spikes or problematic supply. Demand characteristics are promotions, fashion, short life and seasonal, while supply is typically private imports and private label
- Network and Flow Strategy
Network Optimization starts with establishing a vision of alternative flow paths and ends with a full evaluation of end-to-end physical supply chain and a recommended distribution network strategy.
- Assesses merchandise flow paths to provide revenue growth, minimize supply chain costs and support overall inventory strategies.
- Determines alternative distribution strategies including buildings size and location, transportation strategies, inventory deployment strategies, and benefit based business cases.
- Store Operations
- Determines store-level inventory processes that maximize customer-perceived in-stock (several studies show that 40-70% of out-of-stocks occur due to store defects).
- Designs and implements a well-defined process for store operations related to receiving, shelf stocking, perpetual inventory accuracy and plan-o-gram maintenance.
- Organization & Labor Planning
- Life Cycle Management
- Shelf Replenishment
- Data Integrity Maintenance
The idea is to push operations from
- Stores Ordering for basic merchandise to an Automatic Replenishment Approach, which is centrally maintained and enables enhanced High Performance forecasting and allocation abilities
- Store Reviews (all replenishment orders supplement simple forecasting & ordering logic) to Exception-Only Reviews: no store review for standard items; examples of exception reviews are items with high inventories, poor service levels, etc.
- Limited Standards & Policies (In-stock policies and Service levels) to Standard Policies Across the Supply Chain. This is through reliable & repeatable inventory management processes and uniform service standards based on merchandise goals and category/SKU profitability
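The centrally maintained automatic replenishment described above can be sketched as a simple order-up-to policy: order enough to cover demand over the review period plus lead time, plus safety stock for a target service level. The parameter values below are illustrative assumptions, not any retailer's actual logic:

```python
import math
from statistics import NormalDist

def order_quantity(on_hand, on_order, avg_daily_demand, demand_sd,
                   review_days, lead_days, service_level=0.95):
    """Order-up-to level minus current inventory position (never negative)."""
    horizon = review_days + lead_days
    z = NormalDist().inv_cdf(service_level)          # z-score for service level
    safety = z * demand_sd * math.sqrt(horizon)      # buffer against variability
    order_up_to = avg_daily_demand * horizon + safety
    return max(0, round(order_up_to - (on_hand + on_order)))

# e.g. a basic SKU selling ~20 units/day, weekly review, 3-day lead time:
qty = order_quantity(on_hand=90, on_order=0, avg_daily_demand=20,
                     demand_sd=5, review_days=7, lead_days=3)
print(qty)  # 136
```

The point of centralizing this logic is exactly the shift described above: standard policies and service levels applied uniformly, with human review reserved for exceptions.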
Recently I was on the panel of a CXO discussion about how to optimize costs and increase business productivity, when someone from the audience asked about Lean Six Sigma and its relevance, especially in today's economy.
They asked: has Six Sigma been more effective, or have Lean principles helped more? And how exactly do they differ? That dialogue encouraged me to write this post. There is always debate about Lean and Six Sigma being so close that practitioners love to dichotomize them in their thinking.
So what is Six Sigma?
- A Metric? - Less than 3.4 defects per million opportunities of product produced/ service rendered
- A Vision? – Six Sigma is an overall strategy to accelerate improvements in processes, products, and services
- A Value? – Strive for continuous improvement in all activities
- A philosophy? – A proven “pursuit of perfection” business initiative that creates breakthroughs in profitability, quality, and productivity
Six Sigma practitioners follow these tenets as a business philosophy:
- If something cannot be measured, we really do not know much about it.
- If we don’t know much about it, we cannot control it.
- If we cannot control it, we are at the mercy of chance.
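As an aside, the 3.4-defects-per-million metric can be tied back to the name: converting DPMO to a sigma level with the conventional 1.5-sigma long-term shift gives roughly six. A quick sketch:

```python
from statistics import NormalDist

def sigma_level(defects, opportunities):
    """Convert a defect count into the conventional Six Sigma 'sigma level'."""
    dpmo = defects / opportunities * 1_000_000
    # short-term sigma = z-score of the yield, plus the 1.5-sigma shift convention
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(round(sigma_level(3.4, 1_000_000), 1))  # 6.0 – hence "Six Sigma"
```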
Six Sigma started in the manufacturing industry with an emphasis on management of efficient processes, efficient management of people, dedication to measurement systems, etc. – mostly Operational Excellence. But it became apparent that business success was more than the absence of negatives (defects, delays, cost overruns). Six Sigma then began to encompass positives like customer loyalty and delighters in new products. From operational excellence, Six Sigma has moved towards the Customer Intimacy and Product Leadership value disciplines through its DFSS / DMADV tools. There is ongoing debate that Six Sigma does not sit well with an Innovation focus. But all said and done, the tools offered are used everywhere in different flavors and under different terms:
- SIPOC – A top level process mapping tool to document a process in the context of suppliers who provide inputs which are transformed into outputs for the customer.
- Cause & Effect Matrix – A process of identifying problems, finding their causes, and creating the best solutions to keep them from happening again (fishbone diagram).
- Failure Mode & Effects Analysis (FMEA) – A tool used to identify ways the process can fail, estimate the risk of the failure, identify causes of failure, prioritize actions to reduce failure risks, develop control plans to prevent failures
- VOC – The “Voice of the Customer”; the customer specifications/ requirements that dictate acceptable and unacceptable outcomes and drive actions.
- VOP – The “Voice of the Process”; the company's processes doing what they need to do to produce products/services.
- CTQ – “Critical to Quality”; characteristics that significantly influence one or more of the customer requirements.
Of course, recently Six Sigma has begun to be used in the IT industry as Six Sigma for Software.
And what is Lean?
A philosophy that shortens the time line between the customer order and the shipment by eliminating waste (non-value-adding activities). This philosophy is based on the following principles:
- Value – what the customer buys
- Value stream – how value is delivered
- Flow – putting value added steps in sequence. The “flow” or “value-stream” perspective represents a shift from vertical to horizontal thinking. Flow is enabled when materials and processes are standardized across the supply chain to reduce complexity.
- Pull – triggering flow from the customer needs. E.g. have only projects in IT that the pipeline can take i.e. Demand Management.
- Perfection – continuous improvement
Value Added
- Any activity that increases the market form or function of the product or service. (These are things the customer is willing to pay for.) E.g., in the airline industry, actual flying time; in the healthcare industry, diagnosis/treatment.
Non-Value Added = Waste
- Any activity that does not add market form or function or is not necessary. (These activities should be eliminated, simplified, reduced or integrated.) E.g., in the airline industry, lining up to check in; in the healthcare industry, sitting in the waiting room waiting for an appointment. There are specific categories of waste that Lean targets:
- Excess (or early) production – Overproduction
- Inventory – documents, forms, supplies
- Waiting - e.g. delay in obtaining an appointment or test results
- Transportation (to/from processes) / Motion – e.g., leaving exam rooms for equipment or chart forms
- Extra Processing like inspection
- Underutilized people / resources
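One common way Lean practitioners quantify this waste is Process Cycle Efficiency: value-added time divided by total lead time. A sketch using the healthcare example, with illustrative step times:

```python
def process_cycle_efficiency(steps):
    """steps: list of (minutes, is_value_added). Returns PCE as a fraction."""
    total = sum(minutes for minutes, _ in steps)
    value_added = sum(minutes for minutes, va in steps if va)
    return value_added / total

clinic_visit = [
    (25, False),  # sitting in the waiting room (waiting waste)
    (10, False),  # moving between rooms, paperwork (motion/processing waste)
    (15, True),   # diagnosis and treatment (value-added)
]
print(f"{process_cycle_efficiency(clinic_visit):.0%}")  # 30%
```

A number like 30% makes the waste categories above tangible: 70% of the patient's lead time adds no market form or function.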
The way I look at it, Lean is the management philosophy and Six Sigma is a great set of tools that help you chart your path. You have to use Six Sigma first to reduce variation, and then deploy Lean management to take your processes to an altogether new level. Some thoughts to leave you with:
SIX SIGMA is about supporting different situations with different and specific tools:
And LEAN is about looking for efficient solutions and reducing waste:
Ever since I worked at American Express, I have kept hearing from folks that “oh, these credit card companies make 4% of the transaction dollars,” which makes me want to document the whole flow of this industry and look at the economics behind it. The way the transactions work is very well explained at this virtual school link: http://virtualschool.edu/mon/ElectronicProperty/klamond/credit_card.htm
The mix of payment instruments evolves from the dynamics between customers, retailers and issuers – cash, checks, credit cards, debit cards, fleet cards, mobile payments, biometric payments, etc. From the retailers' perspective, the range of instruments that retailers choose to endorse depends on the costs and benefits of each – with remaining transaction values as below:
For credit card transaction processing industry in particular, the value chain of this industry is as below:
But the current trends in this industry are as below:
- Merchants, like Starbucks, have expanded pre-paid and store-card operations with payment processors, bypassing high cost clearing/ settlement networks
- A consortium of major banks, such as BofA, Chase and Citigroup, has started or purchased a rival clearing/settlement network
- Game-changing mobile and internet initiatives, like GSM and Paypal, have expanded into the traditional payments marketplace and captured a small but growing market share
- New lower cost alternatives, such as RevolutionMoney, have successfully attracted merchants in direct competition with traditional providers and established a low cost credit card alternative
- Core banking providers have developed integrated payment solutions embedded in their core banking applications
- Interchange is the fee processors pay to the issuing banks via VISA/MasterCard, and these rates are set by VISA/MasterCard through their associations. Typically, interchange rates are based on a percentage of volume for credit cards and a per-item rate for debit cards. This is evolving as new players jump in.
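To make that pricing structure concrete, here is an illustrative sketch – the rates below are made-up placeholders, not actual network rates:

```python
# Illustrative only: credit interchange as a percentage of volume,
# debit interchange as a flat per-item fee. Real rates vary by card
# product, merchant category, and network, and change over time.

def interchange_fee(amount, card_type,
                    credit_rate=0.018, debit_per_item=0.21):
    """Fee the acquirer/processor pays to the issuing bank for one transaction."""
    if card_type == "credit":
        return round(amount * credit_rate, 2)
    return debit_per_item  # flat fee regardless of ticket size

print(interchange_fee(100.00, "credit"))  # 1.8
print(interchange_fee(100.00, "debit"))   # 0.21
```

The asymmetry is why large-ticket merchants care so much about steering customers toward debit, and why the new entrants above compete on exactly this fee.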
Add to this mobile payments, with companies like Google Wallet jumping into the fray. The mobile ecosystem consists of the usual players – retailers/merchants and financial institutions – but also adds handset manufacturers and network operators.
Because these platforms needed to be built, adoption varies across geographic areas. Here is the percentage of people who currently use a mobile phone for banking transactions at least once a month (according to the EDC-GSMA Mobile Financial Services Survey):
So the title of this blog is so uninviting, and people's inboxes so overflowing, that I will have to do push marketing to make people read this. But with all our clients in almost all industries – from hospitality to financial services to retail – I never leave a strategic meeting without people wondering about the true strategic value of Big Data and asking questions like: “Is it really that new?” “You know us oldies in business and technology, we have seen it all, and someone just came up with a new buzz.” “What is this hoopla about the elephant called Hadoop?”
Well, like all things in life, there are always perspectives and points of view. Whether a process or technology actually affects top-line growth and operational efficiency comes down to the culture, intent and execution mastery of the organization.
So Big Data from social media impacts a lot of business functions, but the greatest impact is on the front end of any business – your customer relationship management (CRM) processes and systems. It implies a fundamental shift in the way organizations interact with prospects, customers, employees, partners and other stakeholders. Leaders must take the first critical step of changing their mindsets and revising some long-held beliefs about building and managing customer relationships.
The traditional 4 Ps of marketing (the famous framework by Jerome McCarthy) have a new dimension – PEOPLE. Technology has enabled people to talk to anyone within the organization or outside it.
- Outside the organization there are customer-to-customer peer conversations (opinion sharing, discussions, idea sharing, complaints, questions, jokes, gossip, news, etc.) for information sharing and relationship building. These happen on “Off Board” outlets (e.g., Facebook, Twitter, YouTube, etc.) – public Internet outlets that organizations do not control, but can choose to participate in.
- At the company-customer touch points there are customer-employee conversations (call centers, chats, forums, sentiment monitoring, brand-association monitoring, etc.) for information sharing and relationship building. These use “On Board” technologies that can be integrated into your public-facing website to engage your constituents. Examples include discussion forums, ratings and reviews, idea-solicitation communities, blogs, etc.
- Within the company walls there are employee-employee conversations (opinion sharing, questions, blogs, etc.) for information sharing – hopefully nothing like the NSA goof-up. These use “In Board” technologies that leverage social media techniques to facilitate collaboration and knowledge sharing among an organization's internal employees.
So Big Data, especially in the Entertainment & Hospitality business, is surely evolving rapidly in its usage. Hotels, travel agencies, airlines, vacation clubs, and other industry players are unlocking the power of Big Data to dramatically improve products and services, thereby enhancing their competitive position and benefitting customers.
- Enhanced revenue management: Hotels recognize that data analytics are helpful in establishing the optimal price for rooms and ensuring that as few as possible are empty. Hotel chain Marriott takes this approach further, using big data for price optimization in restaurants, catering, and meeting spaces too.
- Better relationships with hotel guests: When customer data is aggregated, instead of being fragmented across a hotel’s various divisions, the analytical insights lead to better marketing and customer service.
- Targeted marketing: promotions and targeting are being focused even further to capture incremental revenue.
But a major caveat to using all this is data quality. I was on a consulting panel recently where a good friend mentioned that companies are like true organisms – with a brain, muscle, fat, etc. – but data is truly the blood that keeps the organism alive and running. I loved this analogy and carried it further in my head. Bad data can spoil the organism's functions. Polluted data (too much alcohol in the blood) can be devastating; some threshold is OK, which is why we have data czars (the police) catching folks driving under the influence (management making wrong decisions under the influence of bad data). Now, with a sudden infusion of more diluted blood into the stream, how can we use the red blood cells properly? Eliminating the white noise in big data to get to the little clusters of information wealth is where the value lies. The things to note about Big Data are:
- User-generated content is often triggered by emotion
- Amplified via the “viral effect”
- Impact cannot be stopped or undone
- Does not follow the conventional rules of commerce
- Forces companies to act in shorter cycle times
- Fast and truly global
A lot of health plans can generate incremental prevention savings by improving existing business practices, increasing the enforcement of claim-payment policies, and developing new, robust solutions to increase claim-payment accuracy.
Health claims functions everywhere face a similar set of challenges, which inevitably lead to claims overpayments. Claims overpayment is one driver of increased medical cost that clearly can be controlled. In this world of Obamacare and the uncertainty associated with it, a lot of healthcare companies are honing this capability.
The typical methodology for calculating incremental prevention savings is structured around the establishment of a savings baseline using the four following inputs:
- Payment Ratio Report: To calculate a ratio that can be applied to a denied claim in order to derive the expected value of the denied claim (i.e. to determine the value of the claim, had it been paid). This is generally done by extracting all finalized paid claim lines over a 12 month period. Then one has to group the data at the claim type level (Facility/Professional) and calculate the ratio of the paid amount (net of any member liabilities) to the billed charges.
- Denials Baseline: To determine the value of historical denials associated with a particular payment integrity initiative prior to prevention implementation. This establishes a savings baseline that the company seeks to exceed through implementation of robust prevention measures.
- Leakage Baseline: To determine the value of claims inappropriately or inaccurately paid associated with a payment integrity initiative prior to prevention implementation.
- Denial Rate: The Baseline Denial Rate represents the percentage of the total suspect population that the Client would expect to deny for a particular claims payment integrity initiative prior to prevention implementation. This is done by dividing the value of the Denials Baseline by the value of the total suspect population for the initiative.
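The baseline inputs above can be sketched numerically; all figures below are illustrative:

```python
# Illustrative sketch of the savings-baseline math: payment ratio,
# expected value of a denied claim, and baseline denial rate.

def payment_ratio(paid_amounts, billed_charges):
    """Ratio of net paid dollars to billed charges for a claim type."""
    return sum(paid_amounts) / sum(billed_charges)

def expected_denial_value(denied_billed, ratio):
    """What the denied claim would have paid, had it been paid."""
    return denied_billed * ratio

# e.g. professional claims over a 12-month period (made-up figures):
ratio = payment_ratio(paid_amounts=[600, 450, 300],
                      billed_charges=[1000, 900, 800])
print(round(ratio, 2))                     # 0.5
print(expected_denial_value(2000, ratio))  # 1000.0

# Baseline denial rate: denials baseline over the total suspect population
denial_rate = 450_000 / 3_000_000
print(f"{denial_rate:.0%}")                # 15%
```

The prevention program then tries to beat these baselines: a higher effective denial (or pre-payment stop) rate than 15% represents incremental savings.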
Over the years working with Loyalty programs with so many companies in the Retail sector, I feel I can summarize some observations:
- True customer loyalty is created when the customer becomes an advocate for the organization
- Rewards alone don’t generate loyalty. If a loyalty marketing program is just about earning points you end up buying loyalty not earning it. The loyalty is to the program not the product or the company.
- Rewards-only programs can be easily replicated by the competition, will quickly be commoditized and become a defensive play that no competitor can afford to unwind.
- Loyalty can be attained, but the organization has to work at it continuously, and it will not be possible with all customers.
- There is NO one-size-fits-all loyalty program
- A win-win relationship must be established, and this cannot be accomplished if both parties cannot realize benefit. The two poles must be attracted to each other.
1. Compelling Value Proposition
- That which provides the customer with a tangible benefit if he or she decides to join a benefit program.
- Leading organizations adapt the value proposition for different segments of clients.
2. Customer Satisfaction
- Customer satisfaction is the degree to which customers feel their needs are met.
- Short-term perspective, very much based on the transaction with the customer.
3. Customer Loyalty
- It is a feeling of connection to, and belief in, an enterprise and its proposition, created by a “feel good” factor from interactions that lead to continued relationships.
- Loyalty is ultimately the crucial measure and it is more difficult to achieve than satisfaction.
- A customer can be dissatisfied despite being loyal.
- Loyalty can only be created on the basis of trust and repetitive positive experiences over time.
- The pinnacle of customer loyalty is where the customer acts as an advocate for the enterprise.
In recent years, many banks have talked a lot about the importance of customer knowledge, but only a few of them have put successful actions behind their words. Companies still struggle with the basics of revenue-growth areas:
- Customer Segmentation: Who are my customers, and how do they differ?
- Differentiated Treatment: How should I treat each customer segment?
- Optimization: How can I optimize treatment decisions to maximize value at an individual level?
Customer segmentation is the ability to classify or cluster customers and prospects based on business rules or on inherent patterns in customer behavior data, using advanced statistical modeling tools and techniques. To effectively use the gold mine of customer information, banks must at the same time develop the capabilities to aggregate, analyze, and use the customer data. And the best way to develop these capabilities is to create a dedicated unit at the headquarters / enterprise level. Let's call this unit the “Marketing Factory”.
- The first goal is to create an integrated view of each customer: marketing analytics achieves this goal by developing superior data-management capabilities
- The second goal is to understand and predict customer behaviors: marketing analytics achieves this goal by
- Developing propensity scores
- Performing segmentation and profiling analysis
- Analyzing customer profitability and long-term potential value
- Analyzing customer satisfaction and loyalty
- Developing marketing and sales dashboards
- The third goal is to provide insights that directly improve sales effectiveness. Marketing analytics achieves this goal by:
- Identifying relevant commercial events and related offers
- Defining the next product to offer each customer
- Identifying the most profitable combination of customer segment/channel/product thanks to optimization tools
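As one concrete (and deliberately simple) example of the segmentation capability, here is an RFM-style (recency/frequency/monetary) scoring sketch – a common baseline technique, with thresholds and segment names that are purely illustrative assumptions:

```python
# Illustrative RFM-style segmentation: score each customer on recency,
# frequency, and monetary value, then map the score to a named segment.
# Thresholds and segment names are made up for this sketch.

def rfm_segment(days_since_last, purchases_per_year, annual_value):
    score = 0
    score += 1 if days_since_last <= 30 else 0     # recent activity
    score += 1 if purchases_per_year >= 12 else 0  # frequent buyer
    score += 1 if annual_value >= 5_000 else 0     # high value
    return {3: "champion", 2: "loyal", 1: "developing", 0: "at-risk"}[score]

print(rfm_segment(10, 24, 8_000))  # champion
print(rfm_segment(90, 2, 400))     # at-risk
```

A real "Marketing Factory" would replace these hand-set thresholds with statistical clustering and propensity models, but the output – a named segment per customer that drives differentiated treatment – is the same idea.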
Today I was chatting with the global CIO of a Fortune 100 company, discussing how the alignment of his initiatives was going. He had some interesting analogies for how the business conducts the IT aspects of its operations. He said, “If you run a small business and you buy MS Excel, it doesn't mean that you can manage cash flow better. Accounting has a process to it, a language, etc. that is key to understand before you do things in XLS.” He added, “It's like my daughter buying an expensive digital camera – it doesn't mean the pictures will be better. You need to understand the basics of depth of field, lighting, speed, and apertures before you can leverage that tool to your advantage.”
The debate between TOOLS vs PROCESS and what is more important happens at every strategic meeting and decision making juncture. The third leg of the stool – PEOPLE – is always assumed present or can be brought in easily.
Processes and tools go hand in hand, so the question again is which one comes first – the chicken-or-egg conundrum. It all depends on the industry you are playing in, the position you are in, and, most importantly, which CAPABILITIES you need. Technology is ever evolving, and with tools resulting from technology, one can argue that tools must lead the way for the activities we perform. But a good product, for example, has a limited life span in the marketplace. A good product development process, however, enables a company to create appealing new products over and over again. The alignment of processes and tools is about Efficiency – it is all about HOW the organization should be doing what it decides to take on. For this, companies need to think in 3 dimensions:
- Differentiation “on the outside”—They need to have a clear view of what makes them unique—product, sales, service, brand, or business model. They need to deliver a consistently positive experience for customers in each market segment.
- Simplification “on the inside”—They need simplicity in everything they do and this means standardized or componentized internal products, processes, and systems, with scalable and repeatable business models across the enterprise.
- Execution mastery—They need to prioritize execution as a core capability with the right leadership skills, culture, and change and risk management.
As more and more companies embark on historical-looking metrics to gauge performance, or future-looking predictive analytics to make savvy business decisions, the debate over what to measure is always on in the C-suite.
The importance of measurement is widely understood to try to effect the right behavior. Data translates into information, which finally morphs into knowledge or wisdom that can be used by the organization to create some sustainable competitive advantage. But before we explore why we need to measure and what we need to measure, it’s good to understand the different nuances of measurement systems:
- A Measure is a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or a process. It is a single data point (e.g., number of defects from a single product review).
- Measurement is the act of determining a measure.
- A Metric is a measure of the degree to which a system or process possesses a certain attribute.
- An Indicator is a metric or series of metrics that provides insight into a process, project, or product.
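The distinction between a measure, a metric, and an indicator can be shown in a few lines; the defect-density example and threshold below are illustrative:

```python
# Measures are raw data points; a metric derives an attribute from them;
# an indicator reads the metric to give insight. Figures are illustrative.

defects_found = [3, 1, 4, 2]          # measures: one data point per review
kloc_reviewed = [2.0, 1.5, 3.0, 1.5]  # measures: size of each review

defect_density = sum(defects_found) / sum(kloc_reviewed)  # metric: defects/KLOC
print(defect_density)  # 1.25

# indicator: the metric read against a target to provide insight
print("needs attention" if defect_density > 1.0 else "on track")
```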
The use of metrics or scorecards should encompass the following objectives:
- Verify achievement of deliverables associated with the initiative/project.
- Behavior Modifier - Verify achievement of financial gains anticipated from the initiative/project.
- Cause and Effect Relationships – Verify benefits achieved were a result of the efforts of that particular initiative/project.
- Accountability for results – Make sponsors accountable for results within their areas.
- Enable reuse of processes, models, etc. for future initiatives.
Some of the good principles while designing these metrics:
- At Level 1, you need to restrict the number of KPIs at each organization level to 10
- These should be linked to strategy
- The organizational structure guides the KPI breakdown, with special “perspective” reports
- Selected KPIs must be valid, simple, measurable and controllable
- KPIs must be structured in a logical, mutually exclusive, breakdown structure and should consolidate upwards
- Define clear and structured ownership of KPIs to avoid local optimization
- High quality of KPI structure is crucial for organisational acceptance and needs to be prioritized during KPI design
- KPIs are designed to govern results on group level. Governance culture must be in line with the governance structure on which the KPI design is based
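The "mutually exclusive breakdown structure that consolidates upwards" can be sketched as a small KPI tree whose leaf values roll up to the group level; the structure and numbers below are illustrative:

```python
# Illustrative KPI tree: each node's value is the sum of its children,
# so leaf KPIs consolidate upward without double counting.

def rollup(node):
    """Recursively consolidate leaf KPI values up the tree; returns node value."""
    if "children" in node:
        node["value"] = sum(rollup(child) for child in node["children"])
    return node["value"]

revenue_kpi = {
    "name": "Group Revenue",
    "children": [
        {"name": "Region North", "children": [
            {"name": "Product A", "value": 40},
            {"name": "Product B", "value": 25},
        ]},
        {"name": "Region South", "value": 35},
    ],
}
print(rollup(revenue_kpi))  # 100
```

Mutual exclusivity matters here: if "Product A" also appeared under "Region South", the group number would be wrong – which is exactly why clear, structured KPI ownership is one of the design principles above.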
I was working with some digital marketing folks whose agencies build websites and mobile apps for them. Discussions with the business about achieving short time-to-market always lead to how IT and the agencies build those websites. “Agile” comes up without fail. I have written a bit before about Agile methodology and received feedback from many readers.
Before analyzing the points of the Agile Manifesto in detail, it is important to consider the last sentence. The Manifesto does not state (as an example) that “responding to change” is important and that “following a plan” is not important. This is a common misinterpretation. Looking more closely, it states that both items provide value, although “responding to change” provides more value than “following a plan.” In other words, it is important to follow a plan, but it is even more important to respond to change.
There are several different flavors of Agile Development that I wrote in details about – Extreme Programming (XP), Crystal by Alistair Cockburn, Scrum by Ken Schwaber, Feature Driven Development by Jeff DeLuca, Dynamic Systems Development Method. But the Agile themes and principles are somewhat uniform:
- Welcoming change: Embrace change in order to promote faster delivery of value to the customer and, ultimately, a superior and more creative solution.
- Deliver working software early and often: Deliver working software to the customer as early and as often as possible.
- Simple design (YAGNI): Add only what you need to the system. YAGNI = You Aren’t Going to Need It.
- Pair programming: Develop code with two developers working at a single computer – one thinking tactically about the method being created, while the other thinks strategically about how the method fits into the class.
- Continuous integration: Integrate software changes into the evolving solution as quickly and continuously as possible.
- Close customer collaboration: Work closely with the customer to ensure that their concerns are incorporated into the systems development process.
- Measure progress through working software: Measure progress by measuring the number of required features, or user stories, that are actually working in the application.
- Maintain constant pace: Work a reasonable schedule with no “heroic” peaks.
- Continuous improvement: Consider what is working well and what is not working well – and then adjust the process accordingly.
- Test-driven development: Test early and often. The test is used to drive design and programming.
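The "measure progress through working software" theme is often operationalized as velocity – completed story points per sprint. A sketch with illustrative numbers:

```python
import math

def velocity(completed_points):
    """Average completed story points per sprint; only working stories count."""
    return sum(completed_points) / len(completed_points)

def sprints_remaining(backlog_points, completed_points):
    """Rough forecast: how many more sprints at the current velocity."""
    return math.ceil(backlog_points / velocity(completed_points))

done = [21, 18, 24]                  # points completed in the last three sprints
print(velocity(done))                # 21.0
print(sprints_remaining(105, done))  # 5
```

Note that nothing partially done counts toward `done` – that is the whole point of the principle, and it also underpins the "maintain constant pace" theme.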
- Continuous Integration: This can occur as recommended by Agile, but instead of going directly to Production, new functionality goes to a “Staging” environment, enabling thorough functional testing and providing a platform for users to observe the impact of the sprint.
- Addresses concern for quality: The V-Model Test Stages exist for a reason. Agile Methods theoretically drive exceptional Component (and possibly Assembly) Testing but do not take a holistic view of validating functional requirements or integration with upstream and downstream applications. The “Staging” and “Integration” environments enable the execution of Application Product Test and Integration Product Test. Also, normal Product Test documentation would be required and entry/exit criteria would be adhered to entering IPT (but not APT).
- Folks generally advocate limiting the number of mid-pass releases into a test environment to avoid disrupting that test (and injecting quality issues). However, it is assumed that lower-level testing (i.e., Component and Assembly Testing), through the concept of Test-Driven Design, will enable higher-quality code to be delivered to APT, which offsets the need for tightly controlled code drops in the test environment.