Saturday, 04 September 2010

What is a P&ID (Piping and Instrumentation Diagram)?

A Piping and Instrumentation Diagram (P&ID) is a schematic illustration of the functional relationships among piping, instrumentation and system equipment components.

A P&ID shows all piping, including the physical sequence of branches, reducers, valves, equipment, instrumentation and control interlocks. P&IDs are used to operate the process system.

A P&ID should include:
- Instrumentation and designations 
- Mechanical equipment with names and numbers 
- All valves and their identifications 
- Process piping, sizes and identification 
- Miscellaneous - vents, drains, special fittings, sampling lines, reducers, increasers and swagers
- Permanent start-up and flush lines 
- Flow directions 
- Interconnection references
- Control inputs and outputs, interlocks 
- Interfaces for class changes 
- Seismic category 
- Quality level 
- Annunciation inputs 
- Computer control system input 
- Vendor and contractor interfaces 
- Identification of components and subsystems delivered by others 
- Intended physical sequence of the equipment

A P&ID should not include:
- Instrument root valves
- Control relays
- Manual switches
- Equipment rating or capacity
- Primary instrument tubing and valves
- Pressure, temperature and flow data
- Elbows, tees and similar standard fittings
- Extensive explanatory notes

What are Automatic Guided Vehicles (AGVs)?

An automatic guided vehicle (AGV), also known as a self-guided vehicle, is an unmanned, computer-controlled mobile transport unit powered by a battery-driven electric motor. AGVs are programmed to drive to specific points and perform designated functions. They are becoming increasingly popular worldwide in applications that call for repetitive actions over a distance. Common procedures include load transferring, pallet loading/unloading and tugging/towing. Different models, which include forked, tug/tow, small chassis and large chassis/unit load, have various load capacities and design characteristics. They come in varying sizes and shapes, according to their specific uses and load requirements.
  
AGVs have onboard microprocessors and usually a supervisory control system that helps with various tasks, such as tracking and tracing modules and generating and/or distributing transport orders. They are able to navigate a guide path network that is flexible and easy to program.

Various navigation methods used on AGVs include laser, camera, optical, inertial and wire-guided systems. AGVs are programmed for many different and useful maneuvers, such as spinning and side-traveling, which allow for more effective production. Some are designed for use with an operator, but most are capable of operating independently.

Corporations that use AGVs, often factories, warehouses, hospitals and other large facilities, benefit from the many advantages AGVs have to offer. One of the most beneficial is reduced labor costs. AGVs do not tire like human workers, and when their batteries are drained, charging the AGVs easily replenishes their energy. Loads that AGVs carry are far heavier than any single human could manage, which makes transporting heavy objects quick and simple. AGVs help give companies a competitive edge because they increase productivity and complete the job in an effective and time-efficient manner. They are flexible and can be adapted to many different needs. Also, using AGVs reduces damage to products and increases safety among workers.
  
Currently, AGVs are fairly pricey, and this discourages some companies, but in truth the money is quickly earned back through the reduction of other costs. Manufacturers of AGVs are working on reducing costs and making the units easier to understand in order to attract more potential buyers. Research on these vehicles is ongoing, and new developments in software and movement techniques are frequently being made.

What is Robotics? - Industrial Robotics

Robotics is the science and technology of robots: their design, manufacture, and application. Robotics requires a working knowledge of electronics, mechanics, and software, and a person working in the field has become known as a roboticist.

Although the appearance and capabilities of robots vary vastly, all robots share the features of a mechanical, movable structure under some form of control. The structure of a robot is usually mostly mechanical and can be called a kinematic chain (its functionality being akin to the skeleton of a body). The chain is formed of links (its bones), actuators (its muscles) and joints, which can allow one or more degrees of freedom. Most contemporary robots use open serial chains, in which each link connects the one before to the one after it. These robots are called serial robots and often resemble the human arm. Some robots, such as the Stewart platform, use closed parallel kinematic chains. Other structures, such as those that mimic the mechanical structure of humans, various animals and insects, are comparatively rare. However, the development and use of such structures in robots is an active area of research (e.g. biomechanics).

Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment.

The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases - perception, processing and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). Using strategies from the field of control theory, this information is processed to calculate the appropriate signals to the actuators (motors) which move the mechanical structure. The control of a robot involves various aspects such as path planning, pattern recognition, obstacle avoidance, etc. More complex and adaptable control strategies can be referred to as artificial intelligence.

Any task involves the motion of the robot. The study of motion can be divided into kinematics and dynamics. Direct kinematics refers to the calculation of end effector position, orientation, velocity and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case, in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance and singularity avoidance. Once all relevant positions, velocities and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end effector acceleration. This information can be used to improve the control algorithms of a robot.
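To make the direct/inverse distinction concrete, here is a minimal Python sketch for a hypothetical two-link planar arm; the link lengths and joint angles are illustrative assumptions, not values from the text.

    import math

    def forward_kinematics(l1, l2, t1, t2):
        """Direct kinematics: end-effector (x, y) from known joint angles."""
        x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
        return x, y

    def inverse_kinematics(l1, l2, x, y):
        """Inverse kinematics: joint angles for a desired end-effector position."""
        c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
        t2 = math.acos(max(-1.0, min(1.0, c2)))  # elbow-down solution
        t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
        return t1, t2

    x, y = forward_kinematics(1.0, 0.8, 0.5, 0.3)
    print(inverse_kinematics(1.0, 0.8, x, y))  # recovers the angles (0.5, 0.3)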

Handbook of Industrial Robotics

About the Handbook of Industrial Robotics, Second Edition:

"Once again, the Handbook of Industrial Robotics, in its Second Edition, explains the good ideas and knowledge that are needed for solutions." -Christopher B. Galvin, Chief Executive Officer, Motorola, Inc.

"The material covered in this Handbook reflects the new generation of robotics developments. It is a powerful educational resource for students, engineers, and managers, written by a leading team of robotics experts." - Yukio Hasegawa, Professor Emeritus, Waseda University, Japan.

"The Second Edition of the Handbook of Industrial Robotics organizes and systematizes the current expertise of industrial robotics and its forthcoming capabilities. These efforts are critical to solve the underlying problems of industry. This continuation is a source of power. I believe this Handbook will stimulate those who are concerned with industrial robots, and motivate them to be great contributors to the progress of industrial robotics." -Hiroshi Okuda, President, Toyota Motor Corporation.

"This Handbook describes very well the available and emerging robotics capabilities. It is a most comprehensive guide, including valuable information for both the providers and consumers of creative robotics applications." -Donald A. Vincent, Executive Vice President, Robotic Industries Association

120 leading experts from twelve countries have participated in creating this Second Edition of the Handbook of Industrial Robotics. Of its 66 chapters, 33 are new, covering important new topics in the theory, design, control, and applications of robotics. Other key features include a larger glossary of robotics terminology with over 800 terms and a CD-ROM that vividly conveys the colorful motions and intelligence of robotics. With contributions from the most prominent names in robotics worldwide, the Handbook remains the essential resource on all aspects of this complex subject.


SHIMON Y. NOF, a recognized expert in robotics research and applications, is Professor of Industrial Engineering at Purdue University's School of Industrial Engineering.



Friday, 03 September 2010

What are Calibration Services?

Calibration services adjust instruments and promote proper response; they are available from calibration laboratories and from companies that provide mobile service. Accuracy is crucial to the proper function of precision and measurement tools and devices in many industries today. Thus, the routine calibration of such items is an important part of maintaining the needed accuracy and quality standards. Calibration is typically accomplished by measuring the behavior of a specific device with a monitoring instrument. This instrument allows the calibrator to compare the measured behavior with the standard at which the device should perform. Needed adjustments are then made to the device until it is back in line with the specified standard for the instrument.
 
Instruments that need to maintain a specific unit of measurement to function properly, and instruments that monitor variations of measurement themselves, need to be calibrated regularly. An industrial scale is an example of a device that monitors weight and, over time, will become less accurate due to machine wear. A commercial oven, used for the mass production of baked goods, also needs to be calibrated. Lasers, as well, need to be calibrated regularly in order to maintain the high accuracy needed to perform their various functions.


Examples of measurements that need to be calibrated in specific devices are torque, humidity, temperature, pressure, strain, speed, displacement and mass. The standards for these measurements are defined and agreed upon by national and international standards organizations, such as the ISO (International Organization for Standardization). These standards are important for local and global trade and ensure ethical practices within the market. Calibration services are a necessary by-product of these standards.
 
Although calibration services are an effective way to increase the quality of various devices, they are not always one hundred percent perfect, because it is not possible to know every factor that may affect the calibration. This uncertainty must always be taken into account when considering such operations. There is always the possibility of error, even with the most sensitive devices. Furthermore, a calibration is only as good as the standards that guide it, so it is important to know exactly what must be done to generate accurate performance and measurements.

What is Instrument Calibration?

Instrument calibration refers to the process of determining the relation between the output (or response) of a measuring instrument and the value of the input quantity or attribute - a measurement standard. In non-specialized use, calibration is often regarded as including the process of adjusting the output or indication of a measurement instrument to agree with the value of the applied standard, within a specified accuracy. For example, a thermometer could be calibrated so that the error of indication or the correction is determined, and adjusted (e.g. via calibration constants) so that it shows the true temperature in Celsius at specific points on the scale.
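As a concrete sketch of the thermometer example, the following Python snippet compares indicated readings against reference standards and derives corrections; the readings are invented for illustration.

    reference = [0.0, 50.0, 100.0]   # known standard values (degrees C)
    indicated = [0.3, 50.6, 101.1]   # what the instrument actually shows

    for ref, ind in zip(reference, indicated):
        error = ind - ref      # error of indication at this point on the scale
        correction = -error    # value to add to future readings at this point
        print(f"at {ref:6.1f} C: error {error:+.2f}, correction {correction:+.2f}")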

In the United States, the National Institute of Standards and Technology, a part of the federal government, maintains standards and is considered the arbiter and ultimate (in the U.S.) authority for values of SI units and industrial standards. NIST also provides traceability to its standards by calibration, by which an instrument's accuracy is established by comparing, in an unbroken chain, to higher level standards, e.g. the standards maintained by NIST.

Instrument calibration service providers calibrate various instruments, including many types of process monitoring devices and analytical equipment: flow instruments such as flow meters and sensors, gauges, totalizers or valve position indicators; pressure and vacuum instruments such as pressure sensors or gauges, meters, transducers or vacuum pumps; force, weight or mass instruments including strain gauges, load cells, scales or torque monitors; temperature instruments including thermocouple, RTD or thermistor type devices; humidity instruments including absolute or relative humidity, moisture content or dew point measuring devices; multimeters or electrical meters, either analog or digital; physical or dimensional instruments such as calipers and micrometers; fiber optic or lightwave instruments including multiplexers, analyzers, isolators, etc.; RF or microwave instruments such as transmitters, receivers, antennas, etc.; generators; power supplies, including any AC or DC power supply or conditioners; oscilloscopes, scopes or chart recorders; and signal or function analyzers.
Specific services offered by providers of instrument calibration services include rapid turnaround, on-site calibration, pick-up and delivery, calibration documentation, in-house contract lab services, and online documentation. Rapid turnaround means the supplier offers quick turnaround on instrument calibration services, typically in a few days. A supplier offering on-site calibration has personnel and/or equipment for on-site calibration work, eliminating the added expense of taking the instrument off line and shipping it. A supplier offering pick-up and delivery minimizes the cost and time associated with using in-house personnel. Documentation or test reports show calibration information such as "as found" and "as left" data, the next scheduled calibration, and so on. A supplier that offers in-house contract lab services has the capabilities and resources for setting up an in-house contract lab for the customer, minimizing any downtime or lag in getting instruments calibrated quickly. A supplier with an online documentation system gives access to history, calibration certifications and recalibration notifications.

What is Calibration Management Software?

Calibration management software is used to produce documentation with test equipment calibration results. It is also used to produce calibration reports and calibration certificates. In addition, calibration management software provides database functions such as lookups of calibration procedures and calibration services.

Calibration management software adds value by reducing downtime of critical assets, extending mean time between repairs (MTBR), and increasing productivity. Common users of calibration software include automotive manufacturers, pharmaceutical firms, providers of power and energy, and those in the food and beverage industries. Calibration management software also has known applications in private and government organizations that conduct safety standards testing. Users implement such applications to monitor everything from temperature and thermodynamic outputs to fluid flow rates, pressures, and chemical mixtures.

Calibration management software is typically designed for deployment on single-workstation systems and portable devices, or across closed network systems. This allows the software to be used for monitoring processes involving electrical, electronic, computerized, and robotic manufacturing managed through calibration workstations and test benches. Calibration certificate acquisition is also an important part of the calibration management process for most corporations. Calibration certification applicable to calibration management software includes the current good manufacturing practice (cGMP) standards for drug and food preparation issued by the U.S. Food and Drug Administration (FDA), the associated record keeping under 21 CFR, and the standards for calibration services defined by the International Organization for Standardization (ISO) 17025 specification. The more detailed documentation and database interface functionality of calibration management software largely depends on the type of calibration required. User requirements differ greatly between industries, and practices are more stringently regulated for the calibration of machined parts in public transportation industries and for measurements during food and drug preparation than, say, for local machining and internal test equipment manufacturers.

Calibration management software suppliers are typically highly specialized in a given industry. Some vendors write basic software to comply with a variety of different industries and then provide customized packages depending on customer requirements. As such, this integrates software development with calibration services and tends to make much of the commercially-available calibration management software relatively expensive.


What are Level Sensors?

Level sensors are used to detect liquid or powder levels, or interfaces between liquids. These level measurements can be either continuous or point values, represented with various output options. Continuous level sensors measure level within a specified range and give a continuous reading of level. Point level sensors mark a specific level and are generally used as a high-level alarm or switch.

Multiple point sensors can be integrated together to give a stepped version of continuous level. These level sensors can be either plain sensors with some sort of electrical output or else can be more sophisticated instruments that have displays and sometimes computer output options. The measuring range is probably the most important specification to examine when choosing a level sensor. Field adjustability is a nice feature to have for tuning the instrument after installation.
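A minimal Python sketch of that stepped approximation, assuming four hypothetical point sensors at known heights:

    sensor_heights = [0.5, 1.0, 1.5, 2.0]    # metres above the tank bottom
    sensor_wet = [True, True, False, False]  # True = sensor covered by liquid

    def stepped_level(heights, wet):
        """Return the height of the highest covered point sensor."""
        level = 0.0
        for height, covered in zip(heights, wet):
            if covered:
                level = height
        return level

    print(stepped_level(sensor_heights, sensor_wet))  # -> 1.0 (level between 1.0 and 1.5 m)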

Depending on the needs of the application, level sensing devices can be mounted in a few different ways. These sensors can be mounted on the top, bottom or side of the container holding the substance to be measured. Among the technologies for measuring level are air bubbler technology, capacitive or RF admittance, differential pressure, electrical conductivity or resistivity, mechanical or magnetic floats, optical units, pressure membrane, radar or microwave, radio frequency, rotation paddle, ultrasonic or sonic, and vibration or tuning fork technology.

Analog outputs from level sensors can be current or voltage signals; a pulse or frequency output is also possible. Another option is an alarm output or a change in the state of switches. Computer signal outputs are usually serial or parallel. Level sensors can have analog, digital or video displays. Control for the devices can be analog, with switches, dials and potentiometers; digital, with menus, keypads and buttons; or handled by a computer.

What is a Flowmeter?

Flow measurement is the quantification of bulk fluid or gas movement. It can be measured in a variety of ways.

Volumetric flow rate is sometimes measured in "standard cubic centimeters per minute" (abbreviated sccm), a unit acceptable for use with SI except for the additional information attached to the unit symbol. The SI unit would be m3/s (with any appropriate prefix, and with temperature and pressure specified). The term "standard" indicates that the given flow rate assumes a standard temperature and pressure. Many other similar abbreviations are also in use, such as standard cubic feet per minute or per second. Other units used include gallons (U.S. liquid or imperial) per minute, liters per second, bushels per minute, and acre-feet per day.
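As a quick worked conversion, this Python sketch turns sccm into the SI m3/s; the flow value is an arbitrary example.

    def sccm_to_m3_per_s(sccm):
        """Standard cubic centimetres per minute -> cubic metres per second."""
        return sccm * 1e-6 / 60.0  # 1 cm3 = 1e-6 m3, 1 minute = 60 s

    print(sccm_to_m3_per_s(500))   # 500 sccm is about 8.33e-06 m3/s (at standard T and P)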

Another method of flow measurement involves placing an object (called a shedder bar) in the path of the fluid. As the fluid  passes this bar, disturbances in the flow called vortices are created.

The vortices trail behind the cylinder in two rows, alternately from the top and the bottom of the cylinder. This vortex trail is called the von Kármán vortex street, after von Kármán's 1912 mathematical description of the phenomenon. The speed at which these vortices are created is proportional to the flow rate of the fluid. Inside the shedder bar is a piezoelectric crystal, which produces a small, but measurable, voltage pulse every time a vortex is created. The frequency of this voltage pulse is also proportional to the fluid flow rate, and is measured by the flowmeter electronics.
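The proportionality can be written as f = St * v / d, the Strouhal relation from standard fluid mechanics (not named in the text). A minimal Python sketch, assuming a typical Strouhal number of about 0.2 for a bluff body:

    def flow_velocity(f_vortex_hz, bar_width_m, strouhal=0.2):
        """Flow velocity from the measured vortex shedding frequency."""
        return f_vortex_hz * bar_width_m / strouhal

    print(flow_velocity(120.0, 0.02))  # 120 Hz on a 20 mm shedder bar -> 12 m/s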

Modern innovations in the measurement of flow rate incorporate electronic devices that can correct for varying pressure and temperature (i.e. density) conditions, non-linearities, and the characteristics of the fluid.

What are Pressure Transmitters?

Pressure transducers are devices that convert the mechanical force of applied pressure into electrical energy. This electrical energy becomes a signal output that is linear and proportional to the applied pressure. Pressure transducers are very similar to pressure sensors and transmitters. In fact, transducers and transmitters are nearly synonymous. The difference between them is the kind of electrical signal each sends. A transducer sends a signal in volts (V) or millivolt per volt (mV/V), and a transmitter sends signals in milliamps (mA). 
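As an illustration of the transmitter's milliamp signal, this Python sketch maps a 4-20 mA loop current onto an assumed 0-10 bar range (the range is an assumption for the example):

    def pressure_from_loop(current_ma, p_min=0.0, p_max=10.0):
        """Map a 4-20 mA loop current linearly onto a pressure range (bar)."""
        return p_min + (current_ma - 4.0) / 16.0 * (p_max - p_min)

    print(pressure_from_loop(4.0))    # 4 mA  -> 0.0 bar (bottom of range)
    print(pressure_from_loop(12.0))   # 12 mA -> 5.0 bar (mid-scale)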
  
Both transmitters and transducers convert energy from one form to another and give an output signal. This signal goes to any device that interprets and uses it to display, record or alter the pressure in the system. These receiving devices include computers, digital panel meters, chart recorders and programmable logic controllers. There are a wide variety of industries that use pressure transducers and transmitters for various applications. These include, but are not limited to, medical, air flow management, factory automation, HVAC and refrigeration, compressors and hydraulics, aerospace and automotive.

There are important things to consider when deciding what kind of pressure transducer to choose. The first consideration is the kind of connector needed to physically connect the transducer to a system. There are many kinds of connectors for different uses, including bulletnose and submersible connectors, which have unique applications. Another important part is the internal circuitry of the transducer unit, which is housed by a "can" that provides protection and isolates the electronics. This can be made of stainless steel or a blend of composite materials and stainless steel. The various degrees of protection extend from nearly no protection (an open circuit board) to a can that is completely submersible in water. Other kinds of enclosures safeguard the unit in hazardous areas from explosions and other dangers.
 
The next thing to consider is the sensor, which is the actual component that converts the physical energy to electrical energy. The component that alters the signal from the sensor and makes it suitable for output is called the signal conditioning circuitry. The internal circuitry must be resistant to harmful external energy such as radio frequency interference, electromagnetic interference and electrostatic discharge. These kinds of interference can cause incorrect readings and should be guarded against. Overall, pressure transducers are well-performing, high-accuracy devices that make life easier for many industries.

What are Pressure Controllers?

Pressure controllers are used to regulate positive or negative (vacuum) pressure. They receive pressure sensor inputs, provide control functions, and output control signals. Pressure controllers use several control types. Limit controls protect personnel and equipment by interrupting power through a load circuit when pressure exceeds or falls below a set point.

Advanced controls use non-linear control strategies such as adaptive gain, dead-time compensation, and feed-forward control. Linear controls use proportional, integral and derivative (PID) control; proportional and integral (PI) control; proportional and derivative (PD) control; or proportional (P) control. PID control uses an intelligent input/output (I/O) module or program instruction for automatic closed-loop operation. PI control integrates error signaling for steady-state or offset errors. By contrast, PD control differentiates error signals to derive the rate of change. PD control increases the speed of controller response, but can be noisy and decrease system stability.
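A minimal Python sketch of the PID law described above; the gains, set point and time step are illustrative assumptions, not a vendor implementation:

    class PID:
        def __init__(self, kp, ki, kd, setpoint):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint = setpoint
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measurement, dt):
            error = self.setpoint - measurement
            self.integral += error * dt                   # I term removes steady-state offset
            derivative = (error - self.prev_error) / dt   # D term reacts to rate of change
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    controller = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=5.0)  # hold 5.0 bar
    signal = controller.update(measurement=4.2, dt=0.1)     # control signal to the valve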

Pressure controllers differ in terms of performance specifications, control channels, control signal outputs, and sensor excitation supply. Performance specifications include adjustable dead-band or hysteresis, minimum and maximum set points, update rate or bandwidth, and percentage accuracy. Hysteresis or switching differential is the range through which an input can be changed without causing an observable response. Hysteresis is usually set around the minimum and maximum end points. Control channel specifications for pressure controllers include the number of inputs, outputs, and feedback loops. Multi-function controllers and devices with multiple, linked loops are commonly available. Control signal outputs include analog voltages, current loops, and switched outputs. Some controllers power sensors with voltage levels such as 0-5 V or 0-10 mV. Others power sensors with current loops such as 0-20 mA, 4-20 mA, or 10-50 mA.

Selecting pressure controllers requires an analysis of discrete I/O specifications, user interface options, and special features. Devices differ in terms of the total number of inputs, total number of outputs, and total number of discrete or digital channels. Some pressure controllers provide alarm outputs or are designed to handle high power. Others are compatible with transistor-transistor logic (TTL). Analog user interfaces provide inputs such as potentiometers, dials and switches. Digital user interfaces are set up or programmed with a digital keypad or menus. Pressure controllers with a graphical or video display are commonly available. Devices that include an integral chart recorder can plot data on a strip chart, in a circular pattern, or on a video display. Special features for pressure controllers include self-tuning, programmable set points, signal computation or filters, and built-in alarms or indicators.




What are Temperature Transmitters?

Temperature measurement using modern scientific thermometers and temperature scales goes back at least as far as the early 18th century, when Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale, both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use, alongside the Celsius scale and the Kelvin scale.

Many methods have been developed for measuring temperature. Most of these rely on measuring some physical property of a working material that varies with temperature. One of the most common devices for measuring temperature is the glass thermometer. This consists of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature increases cause the fluid to expand, so the temperature can be determined by measuring the volume of the fluid. Such thermometers are usually calibrated so that one can read the temperature simply by observing the level of the fluid in the thermometer. Another type of thermometer that is not used much in practice, but is important from a theoretical standpoint, is the gas thermometer.

RTD temperature transmitters convert the RTD resistance measurement to a current signal, eliminating the problems inherent in RTD signal transmission via lead resistance. Errors in RTD circuits (especially two- and three-wire RTDs) are often caused by the added resistance of the lead wire between the sensor and the instrument. Transmitter input, specifications, user interfaces, features, sensor connections, and environment are all important parameters to consider when searching for RTD temperature transmitters.

Transmitter input specifications to consider when selecting RTD temperature transmitters include reference materials, reference resistance, other inputs, and sensed temperature. Choices for reference material include platinum, nickel or nickel alloys, and copper. Platinum is the most common metal used for RTDs; for measurement integrity, platinum is the element of choice. Nickel and nickel alloys are also very commonly used; they are economical but not as accurate as platinum. Copper is occasionally used as an RTD element; its low resistivity forces the element to be longer than a platinum element, but it offers good linearity and economy, with an upper temperature range typically less than 150 degrees Celsius. Gold and silver are other options available for RTD probes; however, their low resistivity and higher cost make them fairly rare. Tungsten has high resistivity but is usually reserved for high-temperature work. When matching probes with instruments, the reference resistance of the RTD probe must be known. The most common options include 10 ohms, 100 ohms, 120 ohms, 200 ohms, 400 ohms, 500 ohms, and 1000 ohms. Other inputs include analog voltage, analog current, and resistance input. The temperature range to be sensed and transmitted is also important to consider.
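As a concrete example of the platinum reference material and a 100-ohm reference resistance, this Python sketch converts a Pt100 resistance to temperature using the Callendar-Van Dusen equation for temperatures at or above 0 degrees C (IEC 60751 coefficients):

    import math

    R0 = 100.0       # Pt100 reference resistance, ohms at 0 C
    A = 3.9083e-3    # IEC 60751 coefficient
    B = -5.775e-7    # IEC 60751 coefficient

    def pt100_temperature(r):
        """Solve r = R0 * (1 + A*T + B*T^2) for T (valid for T >= 0 C)."""
        return (-A + math.sqrt(A * A - 4 * B * (1 - r / R0))) / (2 * B)

    print(round(pt100_temperature(138.51), 1))  # 138.51 ohms -> about 100.0 C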

Important transmitter specifications to consider when searching for RTD temperature transmitters include mounting and output. Mounting styles include thermohead or thermowell mounting, DIN rail mounting, and board or cabinet mounting. Common outputs include analog current, analog voltage, and relay or switch output. User interface choices include analog front panel, digital front panel, and computer interface. Computer communications choices include serial and parallel interfaces. Common features for RTD temperature transmitters include intrinsic safety, a digital or analog display, and waterproof or sealed housings. Sensor connections include terminal blocks, lead wires, screw clamps or lugs, and plug or quick connect. An important environmental parameter to consider when selecting RTD temperature transmitters is the operating temperature.

What are Temperature Controllers?

Temperature controllers accept inputs from temperature sensors or thermometers, and output a control signal to keep the temperature at the desired level. Temperature controllers use several different control techniques. Limit control establishes set points that, when reached, send a signal to stop or start a process variable. Linear control matches a variable input signal with a correspondingly variable control signal. Feedforward control does not require a sensor and provides direct control compensation from the reference signal.

Proportional, integral and derivative (PID) control requires real-time system feedback. PID control monitors the error between the desired variable value and the actual value, and adjusts the control accordingly. Fuzzy logic is a control technique in which variables can have imprecise values (as in partial truth) rather than a binary status (completely true or completely false). Temperature controllers that use advanced or non-linear controls such as neural networking, adaptive gain, or emerging algorithms are also available.

Specifications for temperature controllers include number of inputs, number of outputs, input types, output types, and number of zones (if applicable). The number of inputs is the total number of signals sent to the temperature controller. The number of outputs is the sum of all outputs used to control, compensate or correct the process. Input types for temperature controllers include direct current (DC) voltage, current loops, analog signals from resistors or potentiometers, frequency inputs, and switch or relay inputs. Output types include analog voltage, current loops, switch or relay outputs, and pulses or frequencies. Some temperature controllers can also send inputs or receive outputs in serial, parallel, Ethernet or other digital formats which indicate a process variable. Others can send inputs and receive outputs from information converted to an industrial fieldbus protocol such as CANbus, PROFIBUS®, or SERCOS.

Temperature controllers differ in terms of user interface features and regulatory compliance. Many temperature controllers feature a digital front panel or analog components such as knobs, switches, and meters. Computer-programmable, web-enabled, and Ethernet or network-ready temperature controllers are also available. In terms of compliance, a temperature controller that is destined for sale in the European marketplace should meet the requirements of the Restriction of Hazardous Substances (RoHS) and Waste Electrical and Electronic Equipment (WEEE) directives from the European Union (EU).


Kamis, 02 September 2010

Batch S88 Standard Overview

BATCH Standard
Standards exist to facilitate understanding and communication between the people involved. They help optimize revenue and you don't have to start from scratch for every new project. ANSI/ISA S88 is no exception. Although the benefits of S88 may be obvious to some, they are not necessarily as obvious to the decision makers.

S88 defines hierarchical recipe management and process segmentation frameworks, which separate products from the processes that make them. The standard enables re-use and flexibility of equipment and software and provides a structure for coordinating and integrating recipe-related information across the traditional ERP, MES and control domains.


In order to justify the use of S88, it is important to link its technical benefits to the business benefits of such an implementation.

S88 isolates recipes from equipment. When the software (S88-compliant or otherwise) that defines a product (recipe procedure) and the software to run equipment (phase logic) are in the same device (such as a PLC or DCS), the two different sets of code eventually become indistinguishable and, in some cases, inseparable. This makes recipe and equipment control difficult, if not impossible, to maintain. Every additional ingredient and process improvement can lead to lengthy and error-prone software changes. Documenting and validating such a system is also extremely difficult, and doubly so if not S88-compliant.

If recipes are kept separate from equipment control, however, the manufacturing process is more flexible and can provide significant advantages: Automation engineers can design control software based on the full capabilities and performance of the equipment rather than on the requirements of the product. Similarly, the scientists, process engineers or lead operators responsible for recipes can now easily create and edit them.
Separating products from the processes that make them raises the bar on plant flexibility. Equipment that may currently be constrained to one or a limited number of products may be able to accommodate more, improving the overall equipment effectiveness (OEE) metric, a metric that has become the crucial link between the financial and operational performance of production assets.
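A minimal Python sketch of this separation, with invented phase and recipe names: the recipe procedure is plain data that references equipment phases by name, so either side can change without touching the other.

    class EquipmentPhase:
        """Equipment control (phase logic), owned by automation engineers."""
        def __init__(self, name):
            self.name = name
        def run(self, **params):
            print(f"running phase {self.name} with {params}")

    # Phases exposed by a process cell, independent of any one product.
    phases = {name: EquipmentPhase(name) for name in ("CHARGE", "HEAT", "AGITATE")}

    # Recipe procedure, owned by process engineers: data only, no equipment code.
    recipe = [("CHARGE", {"material": "A", "kg": 120}),
              ("HEAT", {"target_c": 80}),
              ("AGITATE", {"rpm": 90, "minutes": 30})]

    for phase_name, params in recipe:
        phases[phase_name].run(**params)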

Batch S88.01 ANSI/ISA Standard

Pharmaceutical Validation

Pharmaceutical Validation is the implementation of a quality system approved by the FDA.

It is designed to ensure that every part of a pharmaceutical installation meets the standards during every stage of its lifecycle (design, construction, startup, production, maintenance and decommissioning). Keywords are "good practices" and "traceability". It is a very large field, and some parts of it may not require much technical knowledge. However, if you have to prove that a certain program in a controller will act in certain ways, it is useful to at least know how these things work in order to design a testing procedure to prove (validate) that this is true.

Pharmaceutical validation is always teamwork. Mechanical, electrical, automation, control, process and software engineers each have to assist the validation officer with all the necessary documentation and testing. The only person who is not really a technical specialist is the validation officer. However, to be successful, he must have broad technical knowledge without necessarily being specialized in one field. His main task is interfacing between the technical staff and the FDA, and he is therefore responsible for the final FDA approval of the entire project.

Check out the 21 CFR Part 11 regulations (Electronic Records; Electronic Signatures), which have become the standard for the pharmaceutical industry.


ANSI/ISA S88 design concepts make validation easier. A modular S88 design will allow one to validate (and revalidate) procedures and equipment separately. With well-written equipment phases, for example, once Phase A is validated, modifying other phases will not upset Phase A's validated state.
Also, validating a recipe procedure is easier once the phases are validated. Since recipe procedure code is decoupled from equipment phase code, the need to revalidate a recipe procedure does not necessarily require that all phases be revalidated. Theoretically, one can validate new additions to a process or revalidate changes to a process faster. Moreover, the S88 modular design approach helps minimize the risk that a change to one part of the process will affect another. Validating new additions to a process or revalidating changes faster gets the product to market quicker.

IQ / OQ / PQ

Installation Qualification, Operational Qualification, and Performance Verification of equipment and instruments are a vital link in the quality chain, and one of the major areas of focus for both FDA (or any other regulatory body) and client audits. All equipment and instruments must meet the manufacturer's or preset standards for operation and performance.

Installation Qualification does not just apply to a brand-new piece of equipment or instrument. A used instrument or piece of equipment that is new to the site must also undergo Installation Qualification. Movement of equipment or instruments requires re-Installation Qualification to ensure proper operation. Many new pieces of equipment and instruments come either with instructions for IQ or with the option of having the manufacturer perform IQ for you.

Operational Qualification and Performance Verification must be performed at least annually for every piece of equipment or instrument. Most manufacturers have provisions for performing these services, at a cost. Having the manufacturer perform them can be confusing, time-consuming and costly.

Performance Verification is also necessary when equipment and instruments are moved, to prove proper operation before and after the move.



What are Laboratory Information Management Systems LIMS?


Laboratory information management systems LIMS software is used to manage data in scientific and commercial laboratories. LIMS software enables scientists and other technical personnel to track samples and specimens during each step of the analytical process, from performing tests to examining test results, to tracking control limits and quality control (QC) values. LIMS software varies by application, but is designed to help laboratories record, manage and organize large collections of data for rapid search and retrieval.

LIMS vendors offer many different types of laboratory information management systems LIMS software. Some products reflect a standard workflow that is common to many scientific and commercial laboratories. Others provide customized modeling functions that reflect client specifications. Although turnkey LIMS software is less expensive, the selection of a customized system may lengthen the LIMS implementation schedule. The proper selection of LIMS software requires a careful analysis of LIMS requirements.


Buyers need to determine data management needs, examine regulatory requirements, and inquire about integration with existing enterprise systems. Costs can be reduced if existing hardware can be reused. LIMS systems that combine third party software and laboratory information management systems LIMS software can share data accurately and integrate strategic platforms for the sharing of lab information and other electronic content.

Laboratory information management systems LIMS software enables scientific and commercial laboratories to improve various quality control (QC) and quality assurance (QA) procedures by automating activities such as entering data from instruments. Laboratory information management systems LIMS software can also be used to help laboratories achieve accreditation based on the technical requirements of ISO/IEC 17025, "General Requirements for the Competence of Testing and Calibration Laboratories". ISO is an abbreviation for the International Organization for Standardization; IEC is an abbreviation for the International Electrotechnical Commission.

What is OSHA Compliance?


The Occupational Safety and Health (OSH) Act was enacted to "assure safe and healthful working conditions for working men and women." The OSH Act created the Occupational Safety and Health Administration (OSHA) at the federal level and provided that states could run their own safety and health programs as long as those programs were at least as effective as the federal program. Enforcement and administration of the OSH Act in states under federal jurisdiction is handled primarily by OSHA. Safety and health standards related to field sanitation and certain temporary labor camps in the agriculture industry are enforced by the U.S. Department of Labor (DOL) Employment Standards Administration's Wage and Hour Division (WHD) in states under federal jurisdiction.

OSHA offers a variety of compliance assistance and outreach products and services to help employers prevent and reduce workplace fatalities, illnesses, and injuries. These include compliance assistance information, publications and tools; education and training courses; cooperative programs for organizations to collaborate with OSHA; free onsite consultation services; and the services of compliance assistance specialists who provide information and training about OSHA requirements.

The OSHA website at www.osha.gov provides information on all OSHA activities and programs, including OSHA laws and regulations, news and events, interactive software called "eTools," posters and publications, education and training programs, cooperative programs, and agency contact information.
What is GxP?
The term GxP is a generalization of quality guidelines, predominantly used in the pharmaceutical industry.

• Good Manufacturing Practice, or GMP
• Good Engineering Practice, or GEP
• Good Laboratory Practice, or GLP
• Good Safety Practice, or GSP
• Good Clinical Practice, or GCP
• Good Distribution Practice, or GDP

GMP is the most commonly known instance of GxP. The term GxP is only used in a casual manner, to abstract from the actual set of quality guidelines.

The purpose of the GxP quality guidelines is to ensure a quality product. They guide pharmaceutical product research, development and manufacturing, and also present a codex for many of the activities off the critical path.

The most central aspects of GxP are:
Traceability: the ability to reconstruct the development history of a drug.
Accountability: the ability to resolve who has contributed what to the development, and when.

Documentation is thus the most crucial instrument.
What is cGMP?
GMP refers to the Good Manufacturing Practice Regulations promulgated by the US Food and Drug Administration under the authority of the Federal Food, Drug, and Cosmetic Act (see Chapter IV for food, and Chapter V, Subchapters A, B, C, D, and E for drugs and devices). These regulations, which have the force of law, require that manufacturers, processors, and packagers of drugs, medical devices, some food, and blood take proactive steps to ensure that their products are safe, pure, and effective. GMP regulations require a quality approach to manufacturing, enabling companies to minimize or eliminate instances of contamination, mixups, and errors. This, in turn, protects the consumer from purchasing a product which is not effective or even dangerous. Failure of firms to comply with GMP regulations can result in very serious consequences including recall, seizure, fines, and jail time.

GMP regulations address issues including recordkeeping, personnel qualifications, sanitation, cleanliness, equipment verification, process validation, and complaint handling. Most GMP requirements are very general and open-ended, allowing each manufacturer to decide individually how to best implement the necessary controls. This provides much flexibility, but also requires that the manufacturer interpret the requirements in a manner which makes sense for each individual business.

GMP is also sometimes referred to as "cGMP". The "c" stands for "current," reminding manufacturers that they must employ technologies and systems which are up-to-date in order to comply with the regulation. Systems and equipment used to prevent contamination, mixups, and errors, which may have been "top-of-the-line" 20 years ago, may be less than adequate by today's standards.
What is a Clean Room?
A cleanroom is an environment, typically used in manufacturing or scientific research, that has a low level of environmental pollutants such as dust, airborne microbes, aerosol particles and chemical vapors. More accurately, a cleanroom has a controlled level of contamination that is specified by the number of particles per cubic meter and by maximum particle size.

Cleanrooms can be very large. Entire manufacturing facilities can be contained within a cleanroom with factory floors covering thousands of square meters. They are used extensively in semiconductor manufacturing, biotechnology, the life sciences and other fields that are very sensitive to environmental contamination.

The air entering a cleanroom from outside is filtered to exclude dust, and the air inside is constantly recirculated through high efficiency particulate air (HEPA) and ultra low penetration air (ULPA) filters to remove internally generated contaminants. Staff enter and leave through airlocks (sometimes including an air shower stage), and wear protective clothing such as hats, face masks, boots and cover-alls.
Equipment inside the cleanroom is designed to generate minimal air contamination. Common materials such as paper, pencils, and fabrics made from natural fibers are often excluded. Low-level cleanrooms are often not sterile (i.e., free of uncontrolled microbes) and more attention is given to airborne particles. Particle levels are usually tested using a particle counter.

Some cleanrooms are kept at a higher air pressure so that if there are any leaks, the air rushes outside. This is similar to the lower pressure used in biological hot zones to keep the microbes inside. Cleanroom HVAC systems often control the humidity to low levels, such that extra precautions are necessary to prevent electrostatic discharges. Entering a cleanroom usually requires wearing a cleanroom suit.

In cheaper cleanrooms, in which the standards of air contamination are less rigorous, the entrance to the cleanroom may be without an air shower. There is an anteroom, in which the special suits have to be put on, and then a person can walk directly into the room.
What is 21 CFR Part 11?
21 CFR Part 11 is a standard that was developed to facilitate and encourage the wider use of technology in the manufacturing of medicinal products. Prior to August 20th, 1997, all information directly relating to the manufacture of medicinal products had to be stored in hard copy. Batch records and process steps had to be manually signed (and dated) by authorised personnel. 21 CFR Part 11 defines the minimum criteria required to make electronic records and electronic signatures trustworthy, reliable, and generally equivalent to handwritten documents and signatures.

There are many requirements for making a system 21 CFR Part 11 compliant, and they are all spelt out in the CFR itself. In summary, however, the system must be able to securely and transparently handle electronic information so that it cannot be altered or doctored to falsify results without leaving an audit trail. To facilitate this, a system must be able to:
1. Log the time, date and ID of the person making an entry into the system (audit trail) - see the sketch after this list
2. Ensure that only authorised persons can access the system (access levels, data encryption)
3. Support two-token signatures (user ID and password)
4. Protect and ensure the uniqueness of signatures (password database encryption and management)
5. Record and protect against unauthorised attempts to access the system.
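The following Python sketch illustrates point 1, an audit-trail entry; the field names and hash chaining are illustrative assumptions about one possible design, not controls prescribed by Part 11 itself.

    import hashlib
    import json
    from datetime import datetime, timezone

    def audit_entry(user_id, action, record_id, prev_hash):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),  # time and date
            "user": user_id,                                      # who made the entry
            "action": action,                                     # what was done
            "record": record_id,
            "prev": prev_hash,                                    # chain to previous entry
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()       # tamper-evident digest
        return entry

    entry = audit_entry("jdoe", "modified batch record", "BR-1042", prev_hash="0" * 64)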

When 21 CFR Part 11 was released in 1997, it was hailed as a landmark regulation that finally made electronic records and signatures as valid as paper records and handwritten signatures. It allows the use of electronic record-keeping systems in complying with regulations. Part 11 (also known as "Electronic Records; Electronic Signatures" or ERES) works in tandem with a predicate rule, which refers to any FDA regulation that requires organizations to maintain records.

It is not possible for any vendor to offer a turnkey 'Part 11 compliant system'. Any vendor who makes such a claim is incorrect. Part 11 requires both procedural controls (i.e. notification, training, SOPs, administration) and administrative controls to be put in place by the user in addition to the technical controls that the vendor can offer. At best, the vendor can offer an application containing the required technical requirements of a compliant system.

Tuesday, 24 August 2010

Sensors in HVAC Applications: Saving Energy

Sensor in HVAC Hierarchy

Sensor Sales by € Sales and %

Road Show Sensor

Temperature Technologies

Energy Saving from Temperature Control
Heating:
Each one-degree reduction in the set value equals a financial saving of approximately €500/month for each 1000 m2 of floor space.
Cooling:
Each one-degree increase in the set value equals a financial saving of approximately €800/month for each 1000 m2 of floor space.
In other words, each 1 degree C is worth roughly 10% of total energy cost; a short worked example follows the reference below.
Ref: BRE (UK)
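A worked example of this rule of thumb in Python, assuming a hypothetical 4000 m2 building:

    HEATING_SAVING = 500  # EUR/month per degree C per 1000 m2 (from the text)
    COOLING_SAVING = 800  # EUR/month per degree C per 1000 m2 (from the text)

    area_units = 4000 / 1000  # floor area expressed in 1000 m2 units
    monthly = (2 * HEATING_SAVING + 1 * COOLING_SAVING) * area_units
    print(monthly)  # heat 2 C lower, cool 1 C higher -> 7200 EUR/month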
    
Humidity Energy Savings
No direct energy savings, but...
Savings are achieved by combining humidity with temperature to measure enthalpy.
Enthalpy (total heat content) is controlled to provide 'free cooling'. Expressed as an equivalent temperature:

Te = T + (L/cp) * q/(1 - q)

where
Te = heat content (equivalent temperature)
T = measured temperature
L = latent heat of vaporisation
cp = specific heat of dry air at constant pressure
q = humidity (the ratio q/(1 - q) is the moisture mixing ratio, w)
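A minimal Python sketch of the calculation, using standard physical values for L and cp and invented humidity readings:

    L_VAP = 2.45e6   # latent heat of vaporisation, J/kg (around 20 C)
    CP_DRY = 1005.0  # specific heat of dry air at constant pressure, J/(kg*K)

    def equivalent_temperature(t_c, q):
        """Te = T + (L/cp) * w, with mixing ratio w = q / (1 - q)."""
        w = q / (1.0 - q)
        return t_c + (L_VAP / CP_DRY) * w

    # Two air streams at the same 24 C dry-bulb but different humidity:
    print(equivalent_temperature(24.0, 0.008))  # drier air: lower heat content
    print(equivalent_temperature(24.0, 0.012))  # moister air: higher heat content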


Air Quality – VOC Sensors

VOC (Volatile Organic Compound) sensors are made from a metal oxide (SnO2) wrapped around a ceramic heater. When heated, oxygen is adsorbed on the surface, and donor electrons are transferred to the adsorbed oxygen, creating a positive charge in a space-charge layer. A surface potential is formed, serving as a potential barrier against electron flow; in other words, the device presents a variable resistance proportional to the VOC concentration. This device reacts to a wide variety of gases; some are desirable to detect for energy saving, others are not.


Air Quality – CO2 Sensors
CO2 is measured by infrared (IR) absorption of radiation, using the non-dispersive infrared (NDIR) technique.
Molecules absorb light (electromagnetic energy) at spectral regions where the radiated wavelength coincides with internal molecular energy levels. By detecting the amount of light absorbed within a narrow bandwidth that coincides with the resonance wavelength of the selected species, the concentration of that species can be measured free from interference by other species.
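The underlying relation is the Beer-Lambert law, I = I0 * exp(-eps * c * l), which is standard optics although not named above; a minimal Python sketch with assumed sensor constants:

    import math

    EPSILON = 15.0  # effective absorption coefficient, assumed for illustration
    PATH_L = 0.05   # optical path length inside the sensor in metres, assumed

    def co2_concentration(i_measured, i_source):
        """Concentration (mol/m3) from transmitted vs emitted IR intensity."""
        return -math.log(i_measured / i_source) / (EPSILON * PATH_L)

    print(co2_concentration(0.97, 1.0))  # weak absorption -> low CO2 concentration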

CO2 Energy Saving
Fresh air supply can be adjusted in proportion to the CO2 level, which in turn is proportional to the human activity within a space.
In times of low or no occupancy, fully recirculated air can be used, saving over 20% of the energy consumption.
Most of the energy saving comes from main plant installations; individual room control is more for comfort conditions.
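A minimal Python sketch of CO2-proportional fresh-air control; the outdoor and target CO2 levels are assumptions, since guideline values vary:

    CO2_OUTDOOR = 420  # ppm, assumed ambient level
    CO2_TARGET = 1000  # ppm, assumed indoor limit

    def fresh_air_fraction(co2_ppm):
        """Damper position from 0.0 (full recirculation) to 1.0 (full fresh air)."""
        fraction = (co2_ppm - CO2_OUTDOOR) / (CO2_TARGET - CO2_OUTDOOR)
        return max(0.0, min(1.0, fraction))

    print(fresh_air_fraction(450))   # near-empty room -> mostly recirculated air
    print(fresh_air_fraction(1100))  # crowded room -> 100% fresh air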


EUBAC Certification
Purpose: to apply performance criteria to HVAC devices and ensure that they meet minimum energy and accuracy targets. Third-party testing of all systems and components, including valves, actuators and sensors. Test houses MUST be independent, be certified to ISO 17021 (testing accreditation), and have invested in proper HVAC test equipment. Certificates are available at http://www.eubac.org/