Tuesday, July 31, 2007

Week 9: SAP Integration Package for SWIFT

SAP INTEGRATION PACKAGE FOR SWIFT: Simplified Corporate-to-Bank communication

Overview:

Global companies often have to maintain a myriad of bank communication interfaces based on different technologies. This results in high total cost of ownership (TCO), a lack of cash-flow transparency, and significant working capital inefficiencies. SAP Integration Package for SWIFT helps reduce the TCO of bank communication and establish a multibank-enabled communication channel via the secure and reliable infrastructure of the Society for Worldwide Interbank Financial Telecommunication (SWIFT).

Large international companies cannot rely on a single financial institution for all of their banking. Even midsize corporations have to maintain several bank communication channels and interfaces to stay competitive in the global marketplace. The result is a myriad of different bank connections based on a variety of technologies. The average costs for such proprietary interfaces often exceed US$50,000 per year per connection.

In addition, these companies fail to achieve seamless integration with their banks via straight-through processing. They must handle payment orders or bank statements separately for each bank. This results in manual processing, productivity losses, excessive exception handling, and an overall lack of cash-flow transparency. Moreover, treasury and cash managers are interested in harmonizing and standardizing their payment infrastructure because of increased compliance pressures and upcoming regulatory changes, especially in the European payment landscape.

SAP Integration Package for SWIFT can help achieve the strategic goals regarding efficient payment processing by providing the following benefits:

  • Tight integration with the SAP NetWeaver platform, the SAP ERP Financials solution, SAP R/3 software (functionality now found in the SAP ERP application), and the SAP Financial Supply Chain Management (SAP FSCM) set of applications
  • Direct access to the network of the Society for Worldwide Interbank Financial Telecommunication (SWIFT), called SWIFTNet, which provides global reach to 7,800 financial institutions
  • Real-time exchange of payment and settlement messages via a secure, trustworthy, and highly available network, using SWIFT’s secure IP network, which provides full redundancy, advanced recovery mechanisms, and first-class operations and customer support services
  • Independence from proprietary country standards and bank-specific electronic-banking products
  • Simplified implementation and auditing of statutory payment regulations through a uniform solution worldwide

SWIFT is the financial industry–owned cooperative supplying secure, standardized messaging services and interface software to 7,800 financial institutions in more than 200 countries. SWIFT’s worldwide community includes banks, broker/dealers, and investment managers, as well as their market infrastructures in payments, securities, treasury, and trade. In 2001 SWIFT opened up its network for corporate customers by offering them direct settlements between companies and banks via member administered closed user groups (MA-CUGs). Companies can join one or multiple MA-CUGs through their administering banks and can exchange payment, cash management, and treasury information with their financial service providers. Corporations can access SWIFTNet via various messaging services, including the following services:

  • FIN, which is suitable for single messages (such as a single credit transfer or end-of-day bank statement based on traditional SWIFT MT standards) and is based on the store-and-forward telecommunications technique
  • FileAct, which can transfer bulk messages (such as domestic payment formats or the new XML-based universal financial industry [UNIFI] messages) and supports both real-time and store-and-forward communication

Besides MA-CUGs, SWIFT also offers a many-to-many closed user group for companies listed on regulated stock exchanges in Financial Action Task Force member countries. This many-to-many closed user group is administered not by banks but by SWIFT itself. The main benefit of this offering is that corporations need to join only one CUG to exchange messages with all their banks via SWIFTNet. Participation in all these messaging services requires, in addition to SAP Integration Package for SWIFT, other services with costs: a SWIFT membership, SWIFT software, network access, and a virtual private network box that creates an encrypted communication link to SWIFTNet. In addition, the partner bank needs to have the necessary messaging capabilities, and SWIFT charges for ongoing transmission services, depending on the file size.

Technical Architecture:

SAP Integration Package for SWIFT is powered by the SAP NetWeaver platform, which integrates people, information, and business processes across technologies and organizational boundaries. Based on open, standard technology, it works with commonly used technologies such as Java 2 Platform, Enterprise Edition, Microsoft .NET, and IBM WebSphere. SAP NetWeaver is the technical foundation for enterprise service-oriented architecture (enterprise SOA). As such, it increases business process flexibility and supports business-ready solutions, reducing the need for custom integration and decreasing project implementation time.

The SAP NetWeaver platform has advanced business process management functionality, allowing the design, execution, and monitoring of the pay-to-reconcile process across applications and systems. The SAP NetWeaver platform supports the FIN service as part of an enhancement of the file adapter of SAP NetWeaver XI by providing the following file-adapter-specific functionality:

  • Creation of SWIFT-specific files that are formatted as protocol data units (PDUs) of the automated file transfer (AFT) format and can be processed by the SWIFT infrastructure (SWIFTAlliance Access)
  • Management of the local authentication mechanism between SAP NetWeaver XI and SWIFTAlliance Access (each record of the file containing an MT message is signed using the HMAC-SHA256 algorithm)
  • Transfer of files between SAP NetWeaver XI and SWIFTAlliance Access (the files are made available to SWIFTAlliance Access or SAP NetWeaver XI either by FTP or through a shared folder)
  • Interpretation and conversion of SWIFT-specific files that are formatted as PDUs of the AFT format and can be processed directly by SAP ERP Financials
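
To make the local authentication step above a little more concrete, here is a minimal Python sketch of computing an HMAC-SHA256 digest over a single file record. The key handling, record framing, and exchange with SWIFTAlliance Access are product-specific details not shown here; the sample record and key are invented for illustration.

```python
import hashlib
import hmac

def sign_record(record: bytes, lau_key: bytes) -> str:
    """Return the hex HMAC-SHA256 digest for one file record.

    Illustrative only: actual local-authentication key management and
    record framing are defined by the SAP/SWIFT products themselves.
    """
    return hmac.new(lau_key, record, hashlib.sha256).hexdigest()

# Hypothetical record and key, purely for demonstration.
print(sign_record(b"{1:F01BANKDEFFXXXX...}", b"shared-lau-key"))
```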

Support of International Payment Standards

SAP Integration Package for SWIFT provides dedicated, prebuilt, out-of-the-box message and process mappings between the application interfaces and business logic of SAP FSCM and SAP ERP Financials and the SWIFT infrastructure. The process-integration content and mappings in the package cover the entire business process from payment to reconciliation and initially support FIN and FileAct. The package enables you to send and receive both MT- and MX-based message types, and it includes standard mappings within the SAP NetWeaver Exchange Infrastructure (SAP NetWeaver XI) component for FIN messages.
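
As a rough illustration of what MT-based content looks like on the wire, the Python sketch below splits a shortened, made-up MT940-style bank statement into its tagged fields. This is not the package's actual mapping logic, just a hedged example of the tag-and-content structure such mappings must interpret.

```python
import re

# Shortened, made-up MT940-style statement body (illustrative only).
SAMPLE = """:20:STMTREF-001
:25:12345678/0001
:28C:100/1
:60F:C070731EUR1000,00
:61:0707310731C500,00NTRFNONREF
:86:INVOICE 42 SETTLEMENT
:62F:C070731EUR1500,00"""

TAG = re.compile(r"^:(\d{2}[A-Z]?):(.*)$")

def split_fields(body):
    """Split an MT-style body into (tag, content) pairs, folding
    continuation lines into the preceding field."""
    fields = []
    for line in body.splitlines():
        match = TAG.match(line)
        if match:
            fields.append([match.group(1), match.group(2)])
        else:
            fields[-1][1] += "\n" + line  # continuation of previous field
    return [tuple(f) for f in fields]

for tag, content in split_fields(SAMPLE):
    print(tag, "->", content)
```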

Thursday, July 26, 2007

Week 8: CMMI - Performance Measurement System

Many organizations throughout the world have invested in CMMI-based process improvement. Many of these organizations have achieved and sometimes surpassed their improvement goals. The achievement of process capability and maturity level goals is an important benchmark of success; however, it is not enough. Organizations undertake process improvement to achieve business-related performance goals.

There now is ample evidence that process improvement using the CMMI Product Suite can result in marked improvements in schedule and cost performance, product quality, return on investment, and other measures of performance outcome. This write-up summarizes much of the publicly available empirical evidence about the performance results that can occur as a consequence of CMMI-based process improvement.

Organizations differ in their business goals and strategic objectives as well as the products and services that they provide. They differ in how they implement CMMI-based process improvement and in the ways they measure their resulting progress and performance.

The performance measures are classified into six broad categories for this write-up. As shown in Figure 1, potential benefits of process improvement might accrue with respect to cost, schedule, productivity, quality, and customer satisfaction. The sixth category is return on investment and related measures, shown in the bottom box. Improvements in the six categories can contribute to additional business goals, for example, greater market share, reduced time to market, lower cost products, and higher quality products.

The SEI's Capability Maturity Model Integration (CMMI®) helps organizations increase the maturity of their processes to improve long-term business performance. Results show that CMMI often leads to marked improvements in product quality, project performance, and organizational performance.

The CMMI Measurement and Analysis (M&A) process area, which spans both software measurement and process improvement practices, enables and promotes decision-making using data analysis that is based on objective measurement. Measurement provides information that improves decision making in time to affect the business or mission outcome.

Manual effort spent laboriously gathering and sifting through raw data, performing analysis by hand, and creating and distributing graphs and reports is time and money wasted.

We can tailor solutions to match the specific and growing needs of companies operating at all maturity and capability levels of the CMMI, from Level 2 through Level 5.

Not only is it easy to implement the CMMI measurement and analysis practices, but we can also track our project's, program's, or company's compliance against the CMMI key process areas.
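
As a toy illustration of automating measurement and analysis rather than compiling spreadsheets by hand, the Python sketch below computes two common (not CMMI-mandated) indicators, defect density and earned-value schedule variance, over hypothetical project data.

```python
def defect_density(defects, ksloc):
    """Defects per thousand source lines of code."""
    return defects / ksloc

def schedule_variance(earned_value, planned_value):
    """Earned-value schedule variance: positive means ahead of plan."""
    return earned_value - planned_value

# Hypothetical project data -- the point is that collection and
# reporting can be scripted instead of assembled by hand.
projects = [
    {"name": "A", "defects": 42, "ksloc": 12.5, "ev": 90.0, "pv": 100.0},
    {"name": "B", "defects": 7, "ksloc": 4.0, "ev": 55.0, "pv": 50.0},
]
for p in projects:
    print(p["name"],
          round(defect_density(p["defects"], p["ksloc"]), 2),
          schedule_variance(p["ev"], p["pv"]))
```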

Case study : CMMI in a Microteam


Monday, July 16, 2007

Week 7: Interorganizational Systems

HOW DELL IS USING WEB SERVICES TO IMPROVE ITS SUPPLY CHAIN

THE PROBLEM

Dell Inc. (dell.com) has many assembly plants. In these plants, located in various countries and locations, Dell makes PCs, servers, printers, and other computer hardware. The assembly plants rely on third-party logistics companies (3PLs), called “vendor-managed hubs,” whose mission is to collect and maintain inventory of components from all of Dell’s component manufacturers (suppliers).

In the past Dell submitted a weekly demand schedule to the 3PLs, who prepared shipments of specific components to the plants based on expected demand. Components management is critical to Dell's success for various reasons: components become obsolete quickly, and their prices are constantly declining (by an average of 0.6 percent a week). So the fewer components a company keeps in inventory, the lower its costs. In addition, a lack of components prevents Dell from delivering its build-to-order computers on time. Finally, the costs of components make up about 70 percent of a computer's cost, so managing component costs can have a major impact on the bottom line. Because it is expensive to carry, maintain, and handle inventories, it is tempting to keep inventory levels as low as possible. However, some inventories are necessary, both at the assembly plants and at the 3PLs' premises. Without such inventories Dell cannot meet its "five-day ship to target" goal (a computer must be on a shipper's truck no later than five days after an order is received).
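
To put the 0.6 percent weekly price decline in perspective, a quick compounding calculation (my own arithmetic, not a figure from the case) shows the cost of holding a component for just eight weeks:

$(1 - 0.006)^{8} \approx 0.953$

That is, a part sitting in inventory for eight weeks has lost roughly 4.7 percent of its value; on components that make up about 70 percent of a computer's cost, that erosion goes straight to the bottom line.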

To minimize inventories, it is necessary to have considerable collaboration and coordination among all parties of the supply chain. For Dell, the supply chain includes hundreds of suppliers that are in many remote countries, speak different languages, and use different hardware and software platforms. Many have incompatible information systems that do not “talk” to each other.

In the past Dell suppliers operated with 45 days of lead time. (That is, the suppliers had 45 days to ship an order after it was received.) To keep production lines running, Dell had to carry 26 to 30 hours of buffer inventory at the assembly plants, and the 3PLs had to carry 6 to 10 days of inventory. To meet its delivery target, Dell created a 52-week demand forecast that was updated every week as a guide to its suppliers.

All of these inventory items amount to large costs (due to the millions of computers produced annually). Also, the lead time was too long.

THE SOLUTION

Dell started to issue updated manufacturing schedules for each assembly plant every two hours. These schedules reflect the actual orders received during the previous two hours. They list all the required components and specify exactly when components need to be delivered, to which plant, and to which location in the plant (building number and exact dock door). These manufacturing schedules are published as Web Services and can be accessed by suppliers via Dell's extranet. The 3PLs then have 90 minutes to pick, pack, and ship the required parts to the assembly plants.
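
The case does not describe Dell's actual interfaces, but a supplier-side consumer of such a published schedule might look roughly like the Python sketch below. The endpoint URL and XML element names are invented for illustration.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint and XML layout -- the case does not publish
# Dell's real interface.
SCHEDULE_URL = "https://extranet.example.com/plants/austin/schedule"

def fetch_schedule(url=SCHEDULE_URL):
    """Pull the latest two-hour manufacturing schedule and yield the
    component lines a 3PL would pick, pack, and ship."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for line in tree.iterfind(".//component"):
        yield {
            "part": line.get("part"),
            "qty": int(line.get("qty", "0")),
            "dock": line.get("dock"),  # exact dock door at the plant
            "due": line.get("due"),    # delivery deadline
        }

for item in fetch_schedule():
    print(item)
```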

Dell introduced another Web Services system that facilitates checking the reliability of the suppliers’ delivery schedules early enough in the process so that corrective actions can take place. Dell can, if necessary, temporarily change production plans to accommodate delivery difficulties of components.

THE RESULTS

As a result of the new systems, inventory levels at Dell’s assembly plants have been reduced from 30 hours to between 3 and 5 hours. This improvement represents a reduction of about 90 percent in the cost of keeping inventory. The ability to lower inventories also resulted in freeing up floor space that previously was used for storage. This space is now used for additional production lines, increasing factory utilization (capacity) by a third.

The inventory levels at the 3PLs have also been reduced, by 10 to 40 percent, increasing profitability for all. The more effective coordination of supply-chain processes across enterprises has also resulted in cost reduction, more satisfied Dell customers (who get computers as promised), and less obsolescence of components (due to lower inventories). As a result, Dell and its partners have achieved a more accelerated rate of innovations, which provides competitive advantage. Dell’s partners are also happy that the use of Web Services has required only minimal investment in new information systems.

CONCLUSIONS ABOUT THE CASE:

Dell's success depends in large part on the information systems that connect its manufacturing plants with its suppliers and logistics providers. The construction and operation of interorganizational information systems (IOSs) that serve two or more organizations is the subject of this case study. In today's economy, such IOSs may also be global. A new information technology, Web Services, has been successfully applied to improve the information systems that connect Dell and its vendors, and the vendors and their parts and components manufacturers. To achieve efficient and effective communication of information, companies may also select from technologies such as EDI, XML, and extranets, which are the other major IOS support technologies.

Wednesday, July 11, 2007

Relationship: SOA and Web 2.0

SOA and Web 2.0 are very closely linked in some ways and miles apart in others.

The linkage: While SOA is primarily focused on improving the business experience, businesses are run by human beings, so improving the user experience is an essential part of improving the overall business experience. Web 2.0 is well positioned to extend that improvement to the presentation layer by taking advantage of the flexibility of the underlying software that SOA provides.

SOA looks at improvements in experience from the inside out (from software to human), while Web 2.0 looks at improvements from the outside in (human to software).

The separation: Not all business applications need user interaction. In fact, for core business applications, the less human intervention, the better (a more automated business). Businesses are constantly on the lookout for ways to cut costs, including data processing costs, and it costs far less to process data as close to the processor as possible (that is, without human intervention). This will probably remain true for the foreseeable future irrespective of how much cheaper processing power becomes, unless it becomes available absolutely free.

Although this number could differ for some businesses (such as e-businesses like eBay and Amazon), for most others more than 80 percent of IT applications are core business applications. Hence Web 2.0 probably has room to contribute only in the remaining 20 percent of applications, which are non-core in nature...

Week 6: Semantic Web

Semantic Web: A Brief Summary

Introduction:

The Web was designed as an information space, with the goal that it should be useful not only for human-to-human communication, but also that machines would be able to participate and help. One of the major obstacles to this has been the fact that most information on the Web is designed for human consumption; even when it was derived from a database with well-defined meanings (in at least some terms) for its columns, the structure of the data is not evident to a robot browsing the Web. Leaving aside the artificial intelligence problem of training machines to behave like people, the Semantic Web approach instead develops languages for expressing information in a machine-processable form. The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The first steps in weaving the Semantic Web into the structure of the existing Web are already under way. In the near future, these developments will usher in significant new functionality as machines become much better able to process and "understand" the data that they merely display at present.

Origin and Development:

The Semantic Web originates from the premise, posed by the Web's inventor Tim Berners-Lee, that the Web is incomplete. Even in his first designs of what the Web should be, there were ideas that did not come into reality in the version of the Web we currently have, which can be called "Web 1.0". In 1999, in conjunction with other people interested in creating a new web, Berners-Lee began a new attempt to realize a more complete picture of his initial Web dream. This attempt was called the Semantic Web and has created a new community of research organized around the Semantic Web Interest Group at the World Wide Web Consortium. The term "Semantic Web", coined by Tim Berners-Lee, denotes the next evolutionary step of the Web. Associating meaning with content, or establishing a layer of machine-understandable data, would allow automated agents, sophisticated search engines, and interoperable services, enabling a higher degree of automation and more intelligent applications. The ultimate goal of the Semantic Web is to allow machines to share and exploit knowledge in the Web way, i.e., without central authority, with few basic rules, in a scalable, adaptable, extensible manner. With RDF as the basic platform for the Semantic Web, a multitude of tools, methods, and systems have just appeared on the horizon. To give a brief introduction, RDF is defined as a general framework for describing a Web site's metadata, or the information about the information on the site. It provides interoperability between applications that exchange machine-understandable information on the Web. RDF can describe information such as a site's sitemap, the dates when updates were made, the keywords that search engines look for, and the Web page's intellectual property rights.

In simple terms, the Semantic Web can be defined as a Web that includes documents, or portions of documents, describing explicit relationships between things and containing semantic information intended for automated processing by our machines. Two important technologies for developing the Semantic Web are already in place: the eXtensible Markup Language (XML) and the Resource Description Framework (RDF).

Semantic Web Architecture and Applications:

Semantic Web architecture and applications are the next generation in information architecture. The previous ideas and principles for completing the Web are being put into practice under the guidance of the World Wide Web Consortium. To reduce the amount of standardization required and increase reuse, the Semantic Web technologies have been arranged into a layer cake, as shown in the figure below. The two base layers are inherited from the previous Web; the rest of the layers build up the Semantic Web, and the top one adds trust to complete a Semantic Web of trust. The layers are arranged in increasing level of complexity from bottom to top, and each higher layer's functionality depends on the lower ones. This design approach facilitates scalability and encourages using the simplest tools adequate for the purpose at hand. All the layers are detailed in the next subsections.

The current architecture for the Semantic Web is mainly split into three layers:

From lowest to highest:

  1. Resource Description Framework (RDF): lets you assert facts,
    e.g., person X is named "Drew".
  2. RDF Schema: lets you describe vocabularies and use them to describe things,
    e.g., person X is a LivingPerson.
  3. Web Ontology Language (OWL): lets you describe relationships between vocabularies,
    e.g., persons in schema A are the same thing as users in schema B (see the sketch after this list).
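
A minimal Python sketch of those three layers, using the open source rdflib library (pip install rdflib); the namespace URI and resource names are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, OWL, RDF, RDFS

g = Graph()
EX = Namespace("http://example.org/")  # hypothetical vocabulary
g.bind("ex", EX)

# RDF: assert a fact -- person X is named "Drew".
g.add((EX.personX, FOAF.name, Literal("Drew")))

# RDF Schema: describe a vocabulary and type a resource with it.
g.add((EX.LivingPerson, RDFS.subClassOf, FOAF.Person))
g.add((EX.personX, RDF.type, EX.LivingPerson))

# OWL: relate vocabularies -- X here is the same as a user elsewhere.
g.add((EX.personX, OWL.sameAs, URIRef("http://other.example/users/drew")))

print(g.serialize(format="turtle"))
```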

The Semantic Web has been developing a layered architecture, which is often represented using a diagram first proposed by Tim Berners-Lee, with many variations since.

While necessarily a simplification which has to be used with some caution, it nevertheless gives a reasonable conceptualisation of the various components of the Semantic Web. We describe briefly these layers.

Unicode and URI: Unicode, the standard for computer character representation, and URIs, the standard for identifying and locating resources (such as pages on the Web), provide a baseline for representing characters used in most of the languages in the world, and for identifying resources.

XML: XML and its related standards, such as Namespaces and Schemas, form a common means for structuring data on the Web, but without communicating the meaning of the data. These are already well established within the Web.

Resource Description Framework: RDF is the first layer of the Semantic Web proper. RDF is a simple metadata representation framework, using URIs to identify Web-based resources and a graph model for describing relationships between resources. Several syntactic representations are available, including a standard XML format.

RDF Schema: a simple type modelling language for describing classes of resources and properties between them in the basic RDF model. It provides a simple reasoning framework for inferring types of resources.

Ontologies: a richer language for providing more complex constraints on the types of resources and their properties.

Logic and Proof: an (automatic) reasoning system provided on top of the ontology structure to make new inferences. Thus, using such a system, a software agent can make deductions as to whether a particular resource satisfies its requirements (and vice versa).

Trust: The final layer of the stack addresses issues of trust that the Semantic Web can support. This component has not progressed far beyond a vision of allowing people to ask questions of the trustworthiness of the information on the Web, in order to provide an assurance of its quality.

Future of Semantic Web:

The Semantic Web has great potential. However, it has been a long time in development and does require an investment of time, expertise, and resources. Nevertheless, the time does seem right to start to think about how best to use the simpler applications of the technology. Although Semantic Web applications are very new, I believe we are at the beginning of the next generation of the internet, and you'll see some interesting services popping up in the near future. Companies are investing heavily in Semantic Web technologies. Adobe, for example, is reorganizing its software metadata around RDF and is using Web ontology-level power for managing documents. Because of this change, "the information in PDF files can be understood by other software even if the software doesn't know what a PDF document is or how to display it." With its recent creation of the Institute of Search and Text Analysis in California, IBM is making significant investments in Semantic Web research. Other companies, such as Germany's Ontoprise, are making a business out of ontologies, creating tools for knowledge modeling, knowledge retrieval, and knowledge integration. The building blocks are here, Semantic Web-supporting technologies and programs are being developed, and companies are investing more money into bringing their organizations to the level where they can utilize these technologies for competitive and monetary advantage.

Monday, July 2, 2007

Week 5 Write-Up: ERP (Traditional Integration Technology)

Traditional Integration Technology: ERP

Let me start this write-up with a good saying:

History teaches everything, including the future.
—Lamartine

Introduction

The names for new technology systems continue to change, but the promises they make remain the same: improve the bottom line. This is the first installment of Back to the Basics, an intermittent series that will unearth the core definitions of buzzwords and key application systems, and chart their evolution. Understanding their evolution is essential to knowing their current use, future developments, and upcoming trends—and more importantly, for making informed decisions.

Definition:-

Enterprise resource planning: An accounting-oriented information system for identifying and planning the enterprise-wide resources needed to take, make, ship, and account for customer orders…

—from the APICS Dictionary, 10th edition

Enterprise resource planning (ERP) systems started as a means for inventory control and grew to replace islands of information by integrating traditional management functions, such as financials, payroll, and human resources, with other functions including manufacturing and distribution. Currently, the complexity of business is creating new user needs; the growth of computers is developing new potential; the quest for new markets by vendors has given users a new voice; and ERP is evolving once again. Names and acronyms like extended-ERP, ERP II, enterprise business applications (EBA), enterprise commerce management (ECM), and comprehensive enterprise applications (CEA) are being tossed about, but what's really going on?

Evolution

In the 1960s, the key goal of an ERP system was inventory control. Manufacturers assumed consumers would continue their buying patterns and aimed to keep enough inventory on hand to meet demand. The sophistication of resource planning grew with the affordability and feasibility of the computer. In the sixties, computers were large, hot, noisy machines that occupied entire rooms, but by the seventies, average manufacturing companies could finally afford them. The innovation that computers enabled prompted management to review traditional product cycles and resource allocation. Materials requirement planning (MRP) computer systems were developed to promote having the right amount of materials when needed. First developed by IBM and J I Case, a US tractor maker, MRP promised to automatically plan, build, and purchase requirements based on the finished products, the current and allocated inventory, and expected arrivals. The master production schedule (MPS) was built to monitor the finished goods. Naturally, data from the MPS fed into the MRP, which contained time-phased net requirements for planning the procurement of subassembly components, raw materials, and ingredients.

MRP gave planners more control, allowing them to be proactive and use time-phased orders, rather than reacting only when delays occurred. However, because of the limitations of computers at the time, the software could handle only a limited number of variables. There was no way to see how a late part, for example, would impact overall production. The general assumption was that delays in the system would mean the customer would receive the product late. Also, backward scheduling, where the start date is calculated backwards from the desired completion date, had to be employed to minimize inventory and still meet the customer's delivery date.
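
The core MRP arithmetic described above, netting gross requirements against stock and backward scheduling from the due date, is simple enough to sketch. The following Python toy uses made-up numbers and is not any vendor's algorithm:

```python
from datetime import date, timedelta

def net_requirement(gross, on_hand, scheduled_receipts):
    """Classic MRP netting: demand not covered by stock or open orders."""
    return max(0, gross - on_hand - scheduled_receipts)

def release_date(due, lead_time_weeks):
    """Backward scheduling: release the order lead-time before it is due."""
    return due - timedelta(weeks=lead_time_weeks)

# Made-up numbers: 500 units due 3 Sep 2007, 120 on hand,
# 200 already on order, six-week procurement lead time.
print(net_requirement(gross=500, on_hand=120, scheduled_receipts=200))  # 180
print(release_date(date(2007, 9, 3), lead_time_weeks=6))  # 2007-07-23
```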

Determining the quantity of parts needed to complete the order, however, was not enough. Companies needed to create capacity plans based on materials, equipment, and priorities to improve efficiency. Thus capacity requirements planning (CRP) emerged. Unfortunately, again due to the limited capabilities of computers, variables such as idle time, maintenance, and labor could not be fitted into the CRP equation. Thus each work center was assumed to have an infinite capacity—a problem that still plagues manufacturers today. Scheduling and planning still remained imprecise. As a result, the need to factor in other resources became apparent.

This need moved beyond the shop floor. Keeping financial tabs on the coming and going of inventory, the labor and overhead involved, and the revenue generated from the delivery was also necessary. Manufacturing resource planning (MRPII) attempted to integrate business planning, sales, support, and other functions together so they could work in concert.

By the nineties, each functional area also saw the benefits of computerized tracking and planning. With computers more common and affordable and programming more sophisticated, each department could use its own software program. Unfortunately, that was the problem: disparate systems and different databases were not linked, and the need for integration became obvious. Moreover, the time to market for consumer goods decreased sharply because of consumer demand. This, combined with new Japanese manufacturing philosophies, meant that western enterprises had to re-evaluate their manufacturing processes. Just in time (JIT), which aimed to eliminate waste and material lag time, meant that suppliers and manufacturers had to develop closer relationships. Also, the outsourcing of labor caused the cost of goods sold (COGS) to shift toward purchased materials. Planners needed to know the cost of material allocations immediately after orders were placed, but buyers purchasing raw materials needed to know the sales plan months in advance. A common database had to be developed: enterprise resource planning was born.

Before the birth of ERP

Prior to the concept of ERP systems, departments within an organization (for example, the Human Resources (HR) department, the Payroll (PR) department, and the Financials department) would each have their own computer systems. The HR computer system (often called HRMS or HRIS) would typically contain information on the department, reporting structure, and personal details of employees. The PR department would typically calculate and store paycheck information. The Financials department would typically store financial transactions for the organization. Each system would have to rely on a set of common data to communicate with the others. For the HRIS to send salary information to the PR system, an employee number would need to be assigned and remain static between the two systems to accurately identify an employee. The Financials system was not interested in employee-level data, but only in the payouts made by the PR system, such as tax payments to various authorities, payments for employee benefits to providers, and so on. This introduced complications: for instance, a person could not be paid in the payroll system without an employee number.
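
A toy Python sketch of that pre-ERP situation, with invented records, shows the kind of cross-system join that depended entirely on the shared employee number:

```python
# Hypothetical pre-ERP silos: separate HR and payroll stores, linked
# only by the employee number that must stay static in both systems.
hr = {1001: {"name": "J. Smith", "dept": "Finance"}}
payroll = {1001: {"salary": 52_000, "tax_authority": "IRS"}}

def paycheck_view(emp_no):
    """Join the two silos on the shared key -- the reconciliation work
    an ERP's single database makes unnecessary."""
    return {**hr[emp_no], **payroll[emp_no]}

print(paycheck_view(1001))
```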

After

ERP software, among other things, combined the data of formerly separate applications. This made the worry of keeping numbers in synchronization across multiple systems disappear. It standardized and reduced the number of software specialities required within larger organizations.

Advantages of ERP:-

  • Ease of use – The ERP System is very user friendly and with the right amount of training, it is easy for the employees to use the system.
  • Introduces business best practices – It helps companies to do away with the incorrect ways of carrying out the different business functions and introduces business best practices. This helps to provide greater control and introduces standardized ways to perform business processes.
  • Ready-made solutions for most problems – As the vendors who develop ERP software packages take the best ideas from all their customers and incorporate them into their products, they develop systems that help resolve most common problems.
  • Easy enterprise wide information sharing – Once the information is entered into the single database, everyone in the organization has access to the information and sees the same computer screen.

Disadvantages of ERP:-

  • Costs – The costs involved in setting up an ERP System are huge. Hence it is very important for companies to first figure out whether their ways of doing business will fit within a standard ERP package. Software cost is not the only expense for the company. They have to consider costs involved in training, data conversion, integration and testing, post-ERP performance issues that may lead to reduction in revenues, maintenance costs etc.
  • Time – The implementation of an ERP system takes a long time. Since time is a valuable resource for the organization, it is important to make accurate estimates of the time required. There is also a chance that the implementation process may slow down routine operations within the organization. (ERP)
  • Acceptance – Employees are never ready to accept change. They don’t really want to change the way they perform their daily tasks and love to stick with the same old ways to carry out their work. Hence it is important for organizations to involve the users in the project activities from the beginning. This will create a sense of ownership in their minds and make them accept the ERP System more willingly. (Shields, 2001)
  • Training – Many a time, companies underestimate the amount of time employees will need to get familiar with new systems. They tend to budget less time and cost for training, which can lead to improper training of employees or actual costs exceeding the budget by a large margin.

Current and future developments in ERP

Currently, the goal of integrated ERP is to replace islands of information with cross-communication to ensure enterprise-wide coherency. Though ERP promises quick access to information, it is still plagued with problems it inherited from MRPII: assumptions of infinite capacity, and inflexible scheduling dates. However, ERP can be purchased as a product. Vendors now offer broad functional coverage nearing best-of-breed capabilities; vertical industry extensions; and strong technical architectures. This, combined with product enhancements, global support, and technology partners, is narrowing the gap between desired and actual features.

Traditionally, the biggest purchasers of ERP solutions have been large Fortune 100 companies; however, the surge of IT investments in the nineties dropped off in 1998, continued to fall until 2000, and has not yet returned to the same levels. As a result, vendors are now looking to increase their market share by meeting the needs of small and medium businesses. However, entering a new market is not enough to build a strong repertoire. What will truly differentiate the leaders in this industry is the breadth, depth, and diversity offered at the plant level, and the ability to meet the requirements of distribution centers. Furthermore, planning functionality will have to extend from the shop floor to distribution centers. This includes flow-based manufacturing, work instruction, dynamic dispatching, and other elements. Web-based, service-oriented architecture will also have to be factored in. New systems will also have to be more customer-focused, incorporating e-commerce interaction and collaboration with business partners.

ERP "extension" software is also in demand. Users want comprehensive functionality from advanced planning and scheduling (APS), manufacturing execution systems (MES), to sales force automation (SFA). As a result, broader customer relationship, business intelligence (BI), business-to-business (B2B) and business-to-commerce (B2C) functionalities are being included. These features need to be integrated, and ideally, "one-stop-shop" offerings should synchronize and integrate releases.

To meet the integration needs of users, all major, traditional ERP players have begun moving into the areas of supply chain management (SCM) and customer relationship management (CRM). For example, in 2004 it was reported that SAP's SCM revenue outpaced industry leaders i2 Technologies, Ariba, and Manugistics. Incorporating SCM functionality may be a way to circumvent MRP II's capacity planning limitations. Furthermore, APS, a subset of supply chain planning (SCP), allows users to create a feasible schedule using identified, finite constraints. Finite capacity creates simulations and allows the user to analyze the results prior to committing to the action. SCM also addresses the need for enhanced information flow among customers, suppliers, and business partners outside of the enterprise. The concept of global logistics was created by combining APS with specialized warehouse and transportation management solutions. Thus the global supply management chain linked suppliers and user companies and encompassed all processes, including initial raw materials, to the consumption of finished goods. Yet, while SCM and its offshoots promise to improve some of the deficits in ERP, it will not be a replacement. No matter how responsive a supply chain execution (SCE) system is, it still functions on the premise of waiting for a problem to occur, then acting on it. This is just as flawed as relying on unyielding plans and never obtaining feedback. They are both needed for an enterprise to be productive.

Product lifecycle management (PLM) too may seem to be a rival system to ERP, perhaps more so than SCM. PLM solutions are oriented around creative product innovation processes, whereas ERP is transaction oriented. Furthermore, PLM stand-alone packages accommodate collaboration better than ERP. However, the market is up for grabs, and PLM vendors need to focus on easy integration with ERP in order to stay competitive. Likewise, if ERP vendors continue to develop extended functionality, collaborative capabilities, accessibility, and integration by incorporating universal interfaces and Web services standards, then PLM's current market superiority will be noticeably diminished.

The Future of ERP

Originally, predictions said that the enterprise resource planning market would quickly end as soon as the year 2000 arrived. This is now proving not to be the case. A recent survey of 50 information technology executives by Forrester Research Inc. showed that year 2000 fixes did not even make the list of the top ERP incentives. New reports show that the year 2000 impact on ERP sales was overstated, and the prediction is that annual compound growth rates for the next five years will be 37 percent. ERP solutions are now being purchased by major firms all over the world to streamline their business processes. New ERP solutions are reaching beyond the traditional industry of manufacturing and are now being tailored to industries such as retail, healthcare, utilities, and telecommunications. The benefit of having everyone in the organization working off one common set of data, with the same pricing and product information, allows companies to be more responsive to customers. The ERP industry is expected to grow from its current level of $11 billion to a $52 billion market in the next five years.

ERP Success Stories

Enterprise systems have led to many gains in productivity and speed of processes. A few examples of ERP working to a company's advantage are seen at IBM's Storage Systems division, Autodesk, and Fujitsu Microelectronics. IBM decreased the time it took to reprice all of its products from 5 days to 5 minutes, and the time to ship replacement parts from 22 days to 3 days. IBM can also now do a complete credit check in 3 seconds, down from the previous 20 minutes. Fujitsu was able to reduce the cycle time for order fulfillment from 18 days to a day and a half. And Autodesk, a leading designer of CAD software, now ships out orders within 24 hours; before it installed ERP, it took an average of two weeks to get an order out.

Case Study

Elf Atochem North America, a two-billion-dollar chemical company, used enterprise systems as part of its corporate and organizational strategy. In the early 1990s, Elf had multiple critical information systems across its 12 business units. The systems were not integrated, and each business unit tracked and reported data separately. This resulted in a lack of information flowing through the organization, and top managers did not have the information necessary to make critical decisions.

The company decided to implement the SAP R/3 product in a client-server environment. The implementation was not labeled within the company as an information technology project; it was treated as a strategic initiative. The company looked at the organization as a whole and noticed that each business unit ran in different ways, making it difficult for customers to understand and do business with the company.

Before the ERP system was put into place, it took Elf Atochem four days to process an order. The company was also not able to coordinate manufacturing and inventory. As a result, the company wrote off greater than $60 million a year in losses. Sales representatives were never able to guarantee order and delivery dates, resulting in lost customers.

The ultimate goal of the company was to turn from an industry laggard into a leader. To do this, it needed to provide better customer service, an area in which it greatly lagged. Elf Atochem decided to focus its efforts on redesigning four key processes: production planning, order management, financials, and materials management. When implementing the R/3 product, the company chose to purchase only the modules that directly supported these four areas; it did not implement modules such as Human Resources or Plant Maintenance. Elf Atochem also made many changes to the organization's fundamental structures in coordination with the ERP implementation.

The ERP system integrated all of the company's financial systems. It also enabled the company to integrate all orders and invoicing throughout all 12 business units. One of the most important things that ERP gave to the company was the real-time information that was necessary to connect sales and production.

Elf Atochem's implementation is now more than 75% complete. The rollout of SAP is ahead of schedule and under budget, a true rarity in this industry. This success is largely due to the management techniques of the implementation team, which is composed of over 60 employees with different areas of expertise. They are installing the system one business unit at a time, which keeps the project under control and manageable.

So far, Elf Atochem's results have been impressive. The company has seen improvements in customer satisfaction levels, and the time it takes to process orders has been greatly reduced. The company is now operating more efficiently, and inventory, receivables, and labor expenses have all been cut. The company estimates that the ERP system, once complete, will save it tens of millions of dollars.

ERP Software packages:

1. webERP: webERP is an open source ERP system for small and medium-sized enterprises. webERP is integrated software capable of simultaneously performing many functions. Each process uses the latest real-time information after it has been updated by other processes in other parts of the business. Continuous online availability of the current status of every area of the business is a critical advantage in today's fast-moving business environment. webERP comes fully equipped with everything required to process multi-currency accounts receivable, multi-location inventory, and multi-currency accounts payable, as well as bank accounts and general accounting.

webERP makes use entirely of web technologies, allowing for massive scalability. It requires only a minimal-bandwidth network connection, a web browser, and Acrobat Reader; the client software is already familiar to most users. It is an accounting system designed from the ground up to be easily run as a web application using the open source PHP scripting language.

For more information, visit www.weberp.org

2. SAP R/3: SAP R/3 is the former name of the main ERP software produced by SAP. Its new name is mySAP ERP. SAP R/2 was a mainframe-based business application software suite that was very successful in the 1980s and early 1990s. It was particularly popular with large multinational European companies who required soft-real-time business applications with multi-currency and multi-language capabilities built in. With the advent of distributed client-server computing, SAP AG brought out a client-server version of the software called SAP R/3 that ran on multiple platforms and operating systems, such as Windows or UNIX, which opened up SAP to a whole new customer base. SAP R/3 was officially launched on 6 July 1992. SAP came to dominate the large business applications market over the next 10 years.

SAP R/3 is a client/server-based application, utilizing a three-tiered model. A presentation layer, or client, interfaces with the user. The application layer houses all the business-specific logic, and the database layer records and stores all the information about the system, including transactional and configuration data.

Conclusion

Implementing ERP systems is not easy, since ERP is not for everyone. Small and medium-sized organizations especially may have to take extra precautions regarding the cost of implementation, training, and maintenance. Organizations need to make the right decision from the beginning, since after implementing ERP it is technically nearly impossible to migrate to another platform due to the tremendous investment cost.

Many companies are now using the Internet for their business. One of the major challenges for the ERP system in the future will be the need to accommodate the e-business requirements of companies that are growing everyday.

Saturday, June 23, 2007

Enterprise Integration Patterns

Patterns : -

People think in patterns. It is the way we naturally communicate ideas related to complex subject areas such as music, science, medicine, chess, and software design. Patterns are not new. We all use them intuitively as part of the learning process without really thinking about it. And because our minds naturally use patterns to perform complex tasks, you can find patterns nearly everywhere.

Patterns in Sports
Consider what happens during a soccer game or an American football game. Individuals who are acting according to predetermined patterns move quickly and decisively against targeted opponents. Each individual's pattern of movement is also part of a larger pattern of orchestration where each player has clear responsibilities and scope. In addition, the entire team is in a binary state—either offense or defense. Without patterns in sports, the games would not be as rich and interesting. Can you imagine how long the huddle would be in an American football game without the language of plays (patterns)?

Note: Software patterns are significantly more complex than these simple examples. These examples are intended to make the notion of software patterns more approachable at the expense of being less technically rigorous.

If you look closer at patterns, you will find relationships between them. In sports, for example, teams have certain plays for offense and certain plays for defense; the patterns that describe two players' actions must fit into a larger pattern that the team is following. In this sense, patterns can be described in terms of hierarchies.

Patterns in Music
Another example of how people think in patterns is the patterns found in music, such as rock and roll. In rock and roll, a rhythm guitar player usually repeats a pattern of chords in a specific key. Against this backdrop, a lead guitarist plays a freeform series of notes from a candidate pattern of notes that correspond to the chord progression being played.

Patterns: Templates and Systems
The reuse of expert knowledge and experience is of high interest in all areas of human work. The major advantages are savings in time and cost and improved quality of problem solutions. One way to make expert experience explicit and reusable is the so-called "pattern". The basic idea of patterns comes from the architect Christopher Alexander, who captured and reused design experience in architecture and civil engineering by using a pattern language. The object-oriented software development community adapted this idea for capturing software design experience in so-called "design patterns".
We adapt the pattern idea of the aforementioned authors to capture experiences in the area of enterprise integration. To describe a pattern, we use a pattern template consisting of five elements:

• Name: the name of the integration pattern.
• Context: a description of the situation an integration problem occurs.
• Problem: a description of the integration problem which has to be solved.
• Solution: a set of guidelines, instructions and rules to solve the problem.
• Example: a description of a typical application of the integration pattern.

A set of patterns describing a family of problem solutions for a given domain is called a pattern system. A pattern system consists of pattern descriptions and relationships between the patterns, describing their interdependencies.
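
As a small sketch (my own illustration, not part of the original pattern literature), the five-element template and a pattern system could be represented as plain data structures:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class IntegrationPattern:
    """The five-element template described above."""
    name: str
    context: str
    problem: str
    solution: str
    example: str

@dataclass
class PatternSystem:
    """Patterns plus the relationships (interdependencies) between them."""
    patterns: List[IntegrationPattern] = field(default_factory=list)
    # (from_pattern, to_pattern, kind), e.g. ("A", "B", "refines")
    relations: List[Tuple[str, str, str]] = field(default_factory=list)
```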

A Pattern System for Metamodel Enterprise Integration Patterns: -
To define "metamodel": sometimes when modeling a set of related systems, usually belonging to a given domain, we realize that these models share many constructs. We are then able to generalize across these different models and come up with a model of what the set of related models should conform to. This is what we call a "metamodel", a model of models. As a matter of fact, the term metamodel is still quite controversial and a matter of discussion. The pattern system for metamodel integration patterns consists of six patterns. These patterns are classified into loose integration patterns, intermediate integration patterns, and strong integration patterns. Loose integration patterns are used for metamodels that are largely complementary; each of the metamodels can exist without depending on the existence of the other. Intermediate integration patterns are used to reuse parts of a source metamodel to build a new metamodel or to aggregate parts into an existing one; some parts of the new metamodel can exist without depending on the source metamodel, while other parts cannot exist independently. Strong integration patterns are used to build a new metamodel completely dependent on a source metamodel; the new metamodel cannot exist independently of the source metamodel.