Brought to you by ControlGlobal.com and Putman Media
May 9, 2007
Headlines from Today's Activities
Winning the Red Queen’s Race
“Our vision,” he said, “is to be the SAP for operations management: the key company to help you reach your operational excellence goals by helping manage all operational data. We offer a common technology infrastructure, with all of our products on this common infrastructure, with a common metadata model.”
The Red Queen’s Race
“We are in a Red Queen’s Race,” he said. “Like Through the Looking-Glass: it takes all the running you can do to stay in place. If you want to get anywhere, you need to run twice as fast. That’s a great metaphor for our situation in business globally. Markets are wide open, and input costs are global. Instead of competing with just two or three companies, a couple of whom are not as smart as you, you are competing globally with some very smart people for all the dollars.”
“You are also competing in the global market for knowledgeable people,” he noted. “Finding good people is getting harder to do, and will continue to be hard as people retire.”
He described Matrikon’s market verticals: Oil and Gas, Mining, Power, Refining and Chemicals, Pulp and Paper. “They all have the same goals,” he said, “Operational Excellence…they all have somewhat different vocabularies, but the goals are the same.” Wryly he concluded, “They want to lower cost, raise profits, and make sure that the CEO doesn’t go to jail.”
Matrikon’s background and strength, Shook said, is in the process industries. “That means we can take innovations from one industry and leverage them throughout the entire group of process verticals we cover.” But, he noted, if you are looking for advances, you should speak to people in industries other than your own, “because some of them are further ahead than you are.”
The first level down in Operational Excellence is about filling the supply chain, “goo in a pipe.” We call that Production Management. Then there is Asset Effectiveness: the amount you get out of your fixed assets. It is also about the people who run the processes: Operational Optimization.
The problem is that excellence is the consequence of continuous attention to detail and stamina. Mediocrity is easy, excellence is hard.
“One of the things that make the process industries unique,” Shook said, “is that we rely on instrumentation to tell us what went where, and how. We don’t actually deal with the product, so much as we deal with the information about the product. The process industries were some of the first information industries.”
But, he went on, we have all this data, whether manually entered or acquired by a DAQ system, and this data goes upward to the people who have the pieces of the goals and objectives. And the data doesn’t work. The data always has discrepancies, and it takes a long time to get data you can do something with.
“You cannot manage what you cannot measure, and you cannot use measurement if you can’t measure quickly and easily,” Shook said. “There is a gap.”
The gap exists because timely access to complete, reliable information about operations is a problem, because automated transfer of data among operational systems is limited, and because self-maintaining applications are few and hard to find.
“If you can tell,” Shook said, “during a shift, that there is a problem, you can fix it. If you only find out in the monthly rollup, you can’t do anything to solve the problem.”
Why is Matrikon like a DC-3?
Shook explained the metaphor of the DC-3. “The first enabling technology we have is Universal Connectivity, with Matrikon OPC Server,” he said. Our OPC suite is the universal connection to all the data in the plant.
“Then we have another enabling technology, our Asset Centric Plant Model, and a third, our Calculation Engine. This means that we can make real products and make them work across all the process industries.”
Then we have the Universal Visualization products.
Shook went on, “Now here’s the bit that makes it economical to tie in these applications and put them in place: the Multilingual Web Service Interface. These are the ‘flaps’ that the DC-3 had and the Boeing 247 did not, the piece that completes the economic viability puzzle.”
The Technology Adoption Life Cycle
“We also call these people ‘laggards,’” Shook quipped.
The innovators and early adopters are driven by a vision. The pragmatists and skeptics are waiting for others to succeed, and the traditionalists don’t want anything to change.
“The focus of the early adopters and innovators is technology. The pragmatists and skeptics are focused on the business problem, and the traditionalists are, well, traditional.”
The innovators and early adopters want (or will accept) infrastructure innovation. They are looking for the enabling technology. The pragmatists and skeptics don’t care what’s under the hood, they just want something that will solve their problems. The traditionalists will take on technology change when they are forced to.
At Matrikon, up to last year, Shook said, our communication was almost entirely focused on early adopters and innovators. “Because that’s what WE are: innovators,” he said. “We are in love with new technology. ‘Look at how cool!’”
Shook continued, “The rest of you want to know if it is going to make money, if it is going to make your life easier, and if it has been done successfully in more than one place. We need to provide you with business cases, case studies, and statistics.”
As far as the traditionalists are concerned, well, folks, we aren’t interested in pushing a rope.
Having spent the last year learning about YOU, we have now made some changes in US.
We are focusing on the value proposition, not the technology. We are trying to make our products easier to understand and easier to use. Product names now explain what the products do, and products are organized by the aspect of the operational excellence challenge they address. We have an enhanced and extended services offering, particularly in alarm management and in new vertical applications built on horizontal products.
Shook made sure we understood that Matrikon is not giving up its technical leadership: Tai-Ji PID, NetCalc, NetKPI, and the quality focus have not changed and will not change.
Matrikon Suite bridges the gap
The products themselves:
“Matrikon Suite,” Shook concluded, “is enabling technology for plant optimization.”
Matrikon's Dave Shook explains the Red Queen's Race.
|Matrikon Alarm Manager — The Definitive Alarm Management System
- Eliminates nuisance alarms
- Collects alarms & events across your enterprise
- Enables fast & thorough incident reviews
- Analyzes, manages, & monitors alarm systems
- Helps monitor operations and operator work load
Build your own Alarm Management Philosophy Document... Right Now!
BA Energy Builds Oil Sands Upgrader with Matrikon Suite
Building a company and a plant at the same time has been BA Energy’s challenge as it works to build the Heartland Upgrader, its first oil sands upgrading facility, in Fort Saskatchewan, Alberta. BA Energy is owned by Value Creation Inc. (VCI), which developed a new upgrading process with capital and energy costs reportedly half those of traditional oil sands upgrading. That’s fortunate, because VCI also holds 365 square miles of oil sands leases, estimated to hold up to 8 billion barrels of recoverable bitumen.
To commercialize VCI’s process, BA Energy is now building its new plant outside of Edmonton. The Heartland Upgrader Project (HUP) is scheduled to start operating in the second half of 2008, and will be the first of six upgraders BA Energy has planned for the Edmonton area. This first upgrader is presently permitted to produce 260,000 barrels per day through 2012.
“We have two basic tasks: to build a bitumen heavy-oil upgrader and to build a new company,” says Kevin Melnyk, general manager at BA Energy’s Heartland Upgrader, speaking May 8 at Matrikon Summit 2007.
To start tackling these goals, BA Energy first organized an Operations Resource Planning Committee to define the new firm’s business processes, as well as those that would be used at the upgrader plant. Defining these processes, in turn, helped BA Energy begin to design its new systems and select the best tools to support them. Melnyk says the committee and BA’s other organizers tied all the applicable stakeholders together, so the new firm could have proactive, real-time decision making, and streamlined its internal and external interactions, so its applications could be scalable and better fit its overall business purposes.
“This isn’t ERP,” says Melnyk. “In fact, I have a nightmare in which I wake up, and find that SAP has developed a DCS.”
As a result, BA Energy enlisted Matrikon to help with a variety of functions and projects on which the Heartland Upgrader is going to be founded. These functions include alarm management and labs information management, and the projects include integrating with the firm’s other systems, including Exaquantum DCS, Maximo, and SAP. Matrikon provides functional elements that integrate the facility’s information architecture to provide a scalable, asset-based application suite that supports the needs and challenges of the business.
Melnyk adds that, three or four months into the HUP project, BA Energy was identifying the risk and probability of failure of each component and system it was considering, so it could also develop maintenance, inventory purchasing, and reliability inspection plans.
Kevin Melnyk is creating a company and a control system at the same time.
On Track with Innovation
In “Designing the Operator Interface for Effective Alarm Management,” Errington described what he called the paradox of automation.
“Better automation leads to more sophisticated processes,” he said, “and more sophisticated processes lead to more opportunities for error. We ‘fix’ the increasing errors with still more automation. Consequently, when things go wrong, people have difficulty intervening to correct the problem.”
When we do alarm management, he noted, we need to address the operator interface, because alarm configuration rationalization and alarm performance monitoring can only improve operator effectiveness to a point. These practices are necessary steps in reducing the alarm system’s impact on operator performance, but even best practices will not completely eliminate alarm floods. Operator interface design completes the loop, he said.
We need to design for perceptual processing, he went on. Operator interface design determines the extent to which specific human information-processing capabilities are used to perform work tasks. Designing for perceptual rather than cognitive processing improves the efficiency and effectiveness of human performance.
“Here’s how sensing works: we have a sensory buffer (iconic memory) that holds a detailed image of the world from all the senses. It is extremely brief, about 10 msec, but its capacity is essentially unlimited,” he said. “Selective attention can influence what we perceive out of the sensory buffer. Recoding is based on innate primitives and learned patterns, and the actual patterns we respond to are often unavailable to conscious awareness.”
Typically, tasks that can be performed on large multi-element displays in less than 200 to 250 msec are considered preattentive. Preattentive properties include high intensity, movement, and emotional salience: bright, loud, smelly, moving, and emotionally charged stimuli are examples.
Categories of visual coding that influence processing at the preattentive level are color, position, form, and motion. Effective use of those attributes enables quick processing of some visual elements relative to others; that is salience. Key design techniques, then, include using distinctive, exclusive color coding to draw attention to alarms and off-normal conditions; using a redundant visual coding scheme to address color-perception deficiencies; using motion to draw attention to unacknowledged alarms, not just to be cool; and providing on-screen navigation links associated with specific process equipment areas to keep attention focused on alarm issues.
Errington reported on a case study by the Abnormal Situation Management Consortium and Nova Chemicals which concluded that an operator interface designed to the ASM specifications enabled operators to respond to conditions before they went into alarm 48% of the time.
Diggin’ That Tar
Hrycay described the basic oil sands extraction challenges.
We are starting to do a lot of SCADA, he said, but lots of people still “drive around with laptops in trucks.”
We started the NSERC/Suncor/Matrikon/iCore Industrial Research Chair in Computer Process Control in order to benefit from research in partnership with the University of Alberta and the Canadian government. Announced October 5, 2006, it is a five-year partnership among the sponsoring organizations.
Controllers, Alarms and Instrument Monitoring System (CAIMS) started in May 2006, with its goals being alarm reduction and critical instrument monitoring.
Alarm performance by month is trending down, but counts are still very high, since the project is only six months old. Overall interventions are trending down as well. Daily alarms at the FTPH plant have gone from 8,000 per day to fewer than 100 per day in six months. Of 749 loops, 361 were not in service; now most are back in service. Bad actors are identified almost immediately.
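Bad-actor identification of the kind Hrycay describes usually amounts to ranking alarm tags by how often they fire. A minimal sketch of that idea in Python (the tag names and log format here are hypothetical illustrations, not from the CAIMS project):

```python
from collections import Counter

def top_bad_actors(alarm_log, n=3):
    """Rank alarm tags by occurrence count in a log of alarm events.

    alarm_log: iterable of tag-name strings, one entry per alarm occurrence.
    Returns the n most frequent tags with their counts, worst first.
    """
    return Counter(alarm_log).most_common(n)

# Hypothetical day of alarm records: a handful of tags dominate the count.
log = ["FIC-101"] * 400 + ["LIC-205"] * 250 + ["TIC-330"] * 90 + ["PIC-412"] * 5
print(top_bad_actors(log, n=2))
```

In practice the log would come from the alarm and event historian rather than an in-memory list, but the ranking step is exactly this simple, which is why bad actors surface almost immediately once the data is collected.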
Hrycay reported that the key benefits from the project included alarm reduction, detection of several malfunctioning instruments, and identification of key constraints and potential bottlenecks; loop tuning is currently in progress based on these findings.
More steps will be undertaken to continue to derive value from the partnership, Hrycay said, because the dollars are real.
Another problem the consortium looked at is separation cell interface level control, a traditionally hard measurement. Instead of an inline instrument, they used a digital camera sending images of the sight glass, and a graduate student designed a soft sensor. Now the digital camera allows the operator to put the process in auto, using an image-based, real-time soft sensor. We took it into ProcessAct (Advanced Control Toolbox), and it is now running and producing a very large payback as a simple single-loop controller running the underflow VFD pumps on the sep cell.
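The core idea behind an image-based interface soft sensor can be reduced to a toy example: scan a column of pixels from the sight-glass image and take the sharpest brightness change as the interface position. The sketch below illustrates only that idea, assuming a single grayscale pixel column with hypothetical values; the actual University of Alberta sensor is considerably more sophisticated.

```python
def interface_row(column):
    """Return the index of the largest brightness jump in a pixel column,
    taken as the interface position in a sight-glass image."""
    best_row, best_jump = 1, 0.0
    for i in range(1, len(column)):
        jump = abs(column[i] - column[i - 1])
        if jump > best_jump:
            best_row, best_jump = i, jump
    return best_row

def level_fraction(column):
    """Interface position as a fraction of the glass height (1.0 = top)."""
    return 1.0 - interface_row(column) / (len(column) - 1)

# Hypothetical 10-pixel column: bright froth above, dark middlings below.
col = [210, 208, 212, 209, 207, 60, 58, 61, 57, 59]
print(interface_row(col))  # the jump from 207 to 60 marks the interface
```

The resulting level estimate can then be fed to an ordinary single-loop controller, which is what makes the camera-based approach attractive: the novelty is in the measurement, not the control law.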
“This slide says ‘economic benefits,’ but it really should say ‘stay in business,’” Hrycay said.
We’re making really good progress, he went on. The partnership’s next steps are best practices in alarms, and continuous loop auditing.
For the sep cells, he said, we are working on multivariable control strategies. “We are also beginning to work on a tailings pumps impeller erosion project, detecting when they are going to fail, and we are also working on production reporting calculation management to replace the spreadsheets with a web based calculation engine.”
The Yin and the Yang of Control
“New approach. Hah.”
“That’s all in the past, for me,” Zhu said. “What can we do now to extend MPC?”
The cost of MPC is too high, even for the refining and petrochemical industries, and that high cost makes MPC difficult to apply in other process industries.
“Think about it: PID loops have been around for more than 50 years, and still half of them don’t work properly. That’s a real challenge.”
Zhu described what he called model intelligence: “Model intelligence is a class of computer programs that can, for a given class of processes, automatically develop, maintain and use dynamic process models for control, prediction, and monitoring/diagnosis.”
Model intelligence consists of three modules, a connectivity module (OPC), a modeling module (online identification), and an application module: MPC tuning/control, PID tuning/control.
“If you can develop models automatically, it is an automatic system, yes?”
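The “modeling module” Zhu mentions, online identification, can be illustrated with the simplest possible case: fitting a first-order discrete model y[k] = a·y[k-1] + b·u[k-1] to input/output data by least squares. This is a toy illustration of automatic model-building, not Matrikon’s or Zhu’s actual algorithm:

```python
def fit_first_order(u, y):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1] from I/O records."""
    # Build the 2x2 normal equations by hand (no numpy needed).
    s_yy = s_yu = s_uu = r_y = r_u = 0.0
    for k in range(1, len(y)):
        yp, up = y[k - 1], u[k - 1]
        s_yy += yp * yp
        s_yu += yp * up
        s_uu += up * up
        r_y += y[k] * yp
        r_u += y[k] * up
    det = s_yy * s_uu - s_yu * s_yu
    a = (r_y * s_uu - r_u * s_yu) / det
    b = (r_u * s_yy - r_y * s_yu) / det
    return a, b

# Simulate a known process (a=0.8, b=0.5) under a varying input,
# then check that identification recovers the parameters.
u = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]
y = [0.0]
for k in range(1, len(u)):
    y.append(0.8 * y[k - 1] + 0.5 * u[k - 1])

a_hat, b_hat = fit_first_order(u, y)
print(round(a_hat, 3), round(b_hat, 3))
```

A production identifier must additionally handle noise, time delay, model-order selection, and data validity checks, which is where the “intelligence” in model intelligence actually lies.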
Tai-Ji PID: a multi-loop PID auto-tuner (coming in the next release)
New-generation MPC: Tai-Ji MPC
We Can Do More
Model intelligence has a bright future, Zhu concluded. Adaptive MPC can reduce costs by a factor of 5; nonlinear MPC can be used for difficult processes; all PID loops will have diagnosis and auto-tuning; and all processes can benefit from MPC technology.
Jamie Errington talks about modern operator interfaces.
|Matrikon Operational Insight — Web-based Decision Support System
- Securely access all your data, regardless of location or format
- Seamlessly deliver information anywhere, anytime
- Create and monitor enterprise KPIs for decision support
- Enforce consistent operational and business processes
Featured Webcast — Collect, Trend and Analyze your OPC Data
How to Achieve Operational Excellence
To put the overall goal in context, Michel Ruel, president of Top Control, Matrikon’s teaching division, reported that 75% of process controls are manufacturing assets, that 97% of control loops are PID, that there are 3 million PID controllers in North America and 5,000 multivariable process controllers (MPC) worldwide, and that 10% to 30% of loops are in manual mode.
Because even the simplest of the millions of existing loops is subject to many adverse effects over time, Ruel says control engineers tuned by trial and error in the 1970s, used bump tests and formulas in the 1980s, used tuning and optimization tools and software in the 1990s, and now are implementing Tai-Ji automated testing, automated identification, and MPC and multi-PID functions. “This removes the burden on the control engineer of working with plant operators to do so many application tests, and it lets users examine many loops at the same time,” says Ruel.
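The “bump tests and formulas” era Ruel refers to can be made concrete. A bump (step) test yields a first-order-plus-dead-time model of the loop — process gain K, time constant τ, and dead time L — and a tuning rule such as the classic open-loop Ziegler-Nichols table converts those three numbers into PID settings. A minimal sketch, with made-up example numbers:

```python
def zn_open_loop_pid(K, tau, L):
    """Ziegler-Nichols open-loop (reaction-curve) PID settings from a
    first-order-plus-dead-time bump-test model.

    K:   process gain, tau: time constant, L: dead time (same time units).
    Returns (Kp, Ti, Td): controller gain, integral time, derivative time.
    """
    Kp = 1.2 * tau / (K * L)
    Ti = 2.0 * L
    Td = 0.5 * L
    return Kp, Ti, Td

# Hypothetical bump-test result: gain 2.0, time constant 10 min, dead time 2 min.
Kp, Ti, Td = zn_open_loop_pid(K=2.0, tau=10.0, L=2.0)
print(Kp, Ti, Td)  # 3.0 4.0 1.0
```

Automated tools essentially wrap this workflow: they run the bump test, fit K, τ, and L from the response, and apply a (usually less aggressive) tuning rule, loop after loop.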
Consequently, users are getting better at converting warehouse-loads of data to usable results, and saving money by finding bad actors and fixing them. However, once this success is achieved, Ruel adds, it can be even more of a challenge to get engineers to keep using these tools over the long term.
“The half-life for performance improvements is usually about six months,” says Ruel. “After this time, user interest, support and regularity declines. So, while it’s important to secure process improvement successes, you must then quantify their value, spread the good news, and let people know about them. You need a champion to make improvements happen, but then you need a coach to keep them alive over the long term.”
While many process applications have gone from reactive maintenance to scheduled maintenance to condition-based management, Ruel explains that condition-based management must be fed with appropriate data, confirmed with prioritized diagnostics, and tracked with quantified results.
“Process engineers aren’t always so good at putting numbers on what they do, but they must do it anyway,” says Ruel.
Optimizing Production in the GoM
Even though it’s drilling in water 3,000 feet deep and deeper in the Gulf of Mexico (GoM), BP is still facing the same aging-workforce issues as its land-based counterparts, according to Albino Castro, manager of BP’s GoM Field of the Future program.
“We’re using software to maximize the efficiency and effectiveness of our existing workforce,” says Castro. As a result, he adds that BP’s work efficiency is at an all-time high; integrity management has driven HSSE incidents to all-time lows; reserve recovery and production have exceeded past performance, and production efficiency has been recognized as the best in its class.
BP bases its process excellence pyramid on a foundation that includes common, reliable, secure infrastructure and data management of its reservoirs, wells, drilling facilities, production accounting, and HSSE capabilities. These capabilities are integrated by one-touch data accessibility, asset KPI dashboards, a web-based data system, hydrocarbon allocation, work process transformation, and real-time marine monitoring. Castro says this integration strives for optimization by communicating useful data to achieve right-time, full-field management.
“The key to all this is that, until you make the data visible to a wider audience, you can’t fix your process problems as well,” says Castro.
As it moves into ever deeper GoM locations, Castro says BP is building a fiber-optic network for its floating platforms, so it can provide real-time data for operators, as well as higher-level data for management. This includes BP’s Integrated Surveillance Information System (ISIS), which has subsurface and surface connections, and enables virtual metering of BP’s production processes.
“A lot of data comes in, and we need to secure one version of the truth,” says Castro. “So, we’ve also built an advanced collaboration environment to hook together our on-shore and off-shore teams. This environment provides off-shore line of sight, allows virtual team meetings, and aids dedicated on-shore teams. Besides providing on-shore remote operations support, this environment replicates off-shore data in a read-only format, and provides 24/7 video conferencing and real-time monitoring of topside, marine, subsea, and drilling operations.”
“We’ve really made some progress up our pyramid,” adds Castro. “Data that used to take three weeks to develop now takes about five minutes, even though some people are already saying that’s still too long.”
Air Products Gains with Alarming
David Espie, advanced control implementation manager for Air Products and Chemicals Inc., says his company recently used Matrikon’s ProcessGuard to help improve operations and overall production at an ethylene vinyl acetate co-polymer plant in Calvert City, Ky. This batch-based facility manufactures 1 million pounds per year of polymers for coating users, such as paper towel manufacturers.
Espie says Air Products’ main goals were to reduce off-spec product, increase capacity by reducing cycle times, and improve the presentation and prioritization of alarms at the plant. “We’d previously lumped lots of signals that weren’t true alarms into this process application,” says Espie. “This plant also had a legacy DCS with no built-in alarm management mechanism, and it needed a cost-effective solution.”
Espie adds that ProcessGuard delivered several primary improvements, including low-cost integration with the DCS, turning alarm data into useful intelligence, and unlocking the plant’s hidden capacity. “We were able to use this alarm improvement as a precursor for going after more useful data, and really change job functions at the site,” he says.
For example, ProcessGuard captures alarm messages and helped generate metering alarm data used to reduce off-spec product at the plant. It also secures sequencing data and stores batch records, which helps reduce cycle times and verify phase steps and the quality of phase parameters.
In addition, ProcessGuard identifies alarm problems and alarm frequency, and ranks alarm data. Espie says this allowed the plant’s managers to identify obvious nuisance alarms and address them by changing alarm limits.
“We’ve also saved engineering time with early detection of faulty metering equipment, reduced inventory with first-time batch improvement, and reduced alarms 80% in two months,” he adds.
Top Control’s Michel Ruel preaches excellence.
Seeking Best Practices
A standard for alarm management can save industry serious money. The cost of abnormal situations comes down to a 3% to 8% loss in throughput, about $20 billion per year. Human error accounts for 42% of alarms, equipment failure for 36%, and the process itself for 22%.
Humans are a weak link: people make mistakes, and there are now fewer people to monitor more things. This makes it essential, Marvan said, for DCS systems to present details to operators effectively.
The new standard has changed the definition of “alarm”: by definition, an alarm now requires operator intervention.
Marvan shared the philosophy behind the proposed S18.02 standard: alarm management as a life cycle. “Alarm management is not a one-time thing,” he said. Adoption requires facilities to take ownership of the issue; like addicts, they have to admit they have a problem. It may require a cultural change to adopt alarm management. The life-cycle model is similar to the one adopted by ISA S-84.01 when it was first adopted in 1996.
The standard will cover alarm philosophy, system requirements, rationalization and identification, basic alarm design, HMI design, advanced alarming techniques, implementation, training and operation, as well as maintenance, monitoring, assessment, and management of change. It will call for an audit process as well. The plan is to submit the draft for review in October of 2007, for adoption sometime in 2008. SP18 does not intend, however, to replace EEMUA.
OPC Catches Fugitive Emissions
Finn uses OPC to ensure a consistently high standard of environmental data. OPC is being used to automate manual plant processes currently in place, using the Rainey Plant as a pilot facility. The goal of his project, he says, “is to reduce implementation costs by leveraging existing technology.”
Finn needs to guarantee the EPA-required 98% uptime in emission reporting.
Finn says his overall goal is to see OPC used to improve the bottom line.
Santee Cooper is using OPC for turbine control. The vendor supplied the model and design and the maximum-efficiency curves, “but the reality is that the plant is always operating in ‘off-design’ conditions,” Finn says, “thus we need real-time knowledge of the system for comparison. The traditional way to make these evaluations is to use rules of thumb and best practices.”
OPC DA lets him access performance models in real time and predict what efficiencies will be.
It gives the operators system-wide knowledge of all process values, performance targets, and the cost impact of deviations. Screens show actual values, best (optimal) values, and actual cost in dollars.
These are efficient tools for operators and engineers. Santee Cooper has ten units running with this system, and Finn’s future goal is to simplify the system even more.
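The dollars-and-cents screens Finn describes boil down to simple arithmetic: the gap between actual and optimal heat rate, times load, times fuel price. A sketch of that calculation follows, with hypothetical numbers; Santee Cooper’s actual performance models are far more detailed.

```python
def heat_rate_loss_per_hour(actual_hr, optimal_hr, load_mw, fuel_price):
    """Hourly cost of running off the optimal heat rate.

    actual_hr, optimal_hr: heat rates in Btu/kWh.
    load_mw:    unit load in MW.
    fuel_price: fuel cost in $/MMBtu.
    """
    # (Btu/kWh difference) x MW gives kBtu/h; /1000 converts to MMBtu/h.
    extra_mmbtu_per_hr = (actual_hr - optimal_hr) * load_mw / 1000.0
    return extra_mmbtu_per_hr * fuel_price

# Hypothetical unit: 500 Btu/kWh off optimum at 500 MW and $3/MMBtu fuel.
loss = heat_rate_loss_per_hour(10500.0, 10000.0, 500.0, 3.0)
print(round(loss, 2))  # 750.0
```

Putting that number on an operator screen, updated in real time from OPC DA values, is what turns an efficiency deviation into an actionable cost.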
Robin Goatey, process management specialist with Ameren Power and NERC CIP compliance project manager, and Rick Kaun, Matrikon’s industrial security and compliance group manager, presented a clear overview of cybersecurity as practiced in the power industry.
The evolution of de facto data centers in the process environment is an unintended result of the exponential expansion of Windows systems, and there is little awareness of the security details of these systems. You don’t have one corporate data center, but perhaps dozens. According to Robin Goatey, “As we watched the expansion of systems in the business environment, we’ve seen the same exponential expansion in control systems. They’re all data centers. In business IT, you have a set of expectations about what security will be. We don’t have the same expectations on the plant floor. Our goal is to instill a ‘data center’ culture in the plant.”
Starting a CIP-compliance effort isn’t easy; this is new, unplowed ground. Even the experts don’t know exactly how to get to CIP compliance and what that means on the factory floor. We are inventing this as we go along, Goatey admitted. Many definitions still need to be hammered out. What constitutes a critical system? The answer will differ from plant to plant, and the industry needs to figure it out. It also has to balance security against capital expenditures.
The first step is asset identification and gap analysis. CIP 002 states the requirement to rank and categorize all critical assets through risk-based analysis. (Nobody has been able to really nail down what that means yet, Goatey avers.) Plants then have to identify “critical cyber assets,” a subset of the critical assets. There is no right answer to these questions; the answers have to be based on the judgment of experts on the ground: plant folks, IT, and maybe even legal experts.
Critical cyber assets have to have an electronic perimeter around them: each must have a firewall and an electronic camera monitoring it, and everyone with access to it has to have a background check. Identifying the scope of the program is very important. This is not a project, but a change in business processes. “Right now this affects the power industry, but it will eventually affect other verticals as well,” says Rick Kaun.
What is in scope and what isn’t? How big a piece are you going to chew off? “This is the biggest challenge,” Kaun says.
You need a multi-disciplined team, a blend of security, process control, and other disciplines. “Cannot stress this point enough,” says Rick Kaun. The biggest impact on cost and effort comes in managing the program once it is established.
Mik Marvan explains the new ISA alarm standard.