Monday, March 8, 2010

General Commissioning Procedure for DDC Systems


Foreword

Understanding a facility owner's global decision priorities that underlie project design intent is a key element in commissioning any system. Global priorities include first cost, comfort, operating costs, reliability, return on investment, support for the environment and special owner needs. These priorities help focus commissioning activities on areas that meet the owner's needs. Understanding these priorities will aid both in defining acceptance criteria for evaluating compliance with design intent and in determining which verification checks and functional tests need to be performed.

Commissioning fieldwork is conducted in accordance with a project-specific test procedure. The procedure must include a clear description of the design specifications, control sequences, manufacturer cut sheets and equipment performance specifications, installation instructions and O&M manuals, in addition to the verification checklists and functional tests. This information forms the basis of the commissioning acceptance criteria unless otherwise clearly specified, and is necessary for evaluating the results of the checks and tests. The commissioning procedure must incorporate all the details required for the DDC system/facility being commissioned. In addition, a successful commissioning process requires coordination with all commissioning team members.

In commissioning a direct digital control (DDC) system, the intent, typically, is to assure that the DDC system automatically controls the HVAC system, maintaining good indoor air quality and comfort, while minimizing energy use and the use of operator and/or building staff time. The primary goal is to verify that the DDC system has been installed and is working as specified, while also looking for opportunities to improve upon its intended operation.

Protocols for commissioning a DDC system include verification checks of the DDC interface with installed equipment, subsystems and systems and functional tests of the DDC system control functions. The DDC components important to the commissioning effort include central processing/monitoring hardware and software, communications/alarm function, user interface with the DDC system, control functions required for facility operation, local control panels and individual monitored points. The DDC performance parameters can vary widely depending upon the size and complexity of the facility/system being monitored and the level of control delegated to DDC. Some basic monitored parameters include time of day, start/stop control, temperature, proof of flow, voltage, amperage, heat/smoke, lighting levels and occupancy.

Verification checks address equipment nameplate data and documentation, the physical installation, electrical system, system controls, and test and balance of controlled systems. Each sequence and system should be 100% point-to-point tested to ensure system operation through DDC control. Following the completion of the verification checks, functional tests can commence. Functional test requirements should be refined, as required, using the information gathered while conducting the verification checks. This testing is intended to verify the DDC system's control over specific system functions. These checks and tests are not intended to replace the contractor's normal and accepted procedures for installing and pre-testing equipment, or to relieve the contractor of standard checkout and start-up responsibilities, but to assure the owner that design intent has been met. Any equipment, condition, or software program found not to be in compliance with the acceptance criteria should be repaired or corrected and then retested until satisfactory results are obtained. Following on-site testing, the test results and documentation are compiled and a final commissioning report is prepared.

This procedure was developed with the assistance of PG&E's Commissioning Test Protocol Library's Templates. It identifies steps that need to be taken to fully commission a new DDC system. The checks and tests provided are intended to serve as a guideline in the preparation of the project-specific verification checks and functional tests. Examples are based on a fictitious building in San Francisco, CA.

Following is an outline of what is included:

A. Initiating Issues
Project/Building description
Control system description
Design Intent / Level of control desired / What is acceptable performance?
Definition of roles and responsibilities
Development of project specific checks and tests
Prerequisites/Documentation requirements
B. Verification Checks


  1. Hardware set-up
    1. Network
    2. Sensors, actuators, valves and dampers


  2. Software & Programming
    1. Software installed
    2. Operator interface and graphics
    3. Scheduling
    4. Offline demonstration of control sequences and energy conservation applications: logic check
    5. Monitored points
    6. Trends (set-up and archive/data storage)
    7. Alarms (priority, routing, printer, call-out)
    8. Standard Reports


C. Functional Tests
    1. General procedure
    2. Generic operational trend test protocol
    3. Generic sequence of operation (SO) test protocol
    4. Trends
    5. Remote dial-up
    6. Critical alarm call-out
    7. Access/Passwords
D. As-built Record Drawings
E. Training
F. Results and Recommendations for Final Acceptance
G. References Used to Develop this Protocol





Purpose
This procedure prescribes a uniform set of methods for conducting commissioning verification checks and functional tests of HVAC DDC Systems.


Scope


  1. This procedure includes the following:


    1. definitions and terminology



    2. a general description of method(s) provided



    3. required information and conditions for initiating a check or test



    4. recommendations for applying general protocols to specific applications


    5. uniform method(s) including identification of test equipment and measurement points for performing such checks or tests



    6. identification of requirements for acceptance



    7. references and bibliography


  2. If necessary, list any items that are not covered by this procedure.



DEFINITIONS
controlled device: a device (e.g., an actuator) that responds to a signal from a controller or adapter, which changes the condition of the controlled medium or the state of an attached device (e.g., a damper). The combination of the controlled device and its attached device may also be considered a controlled device.


controller: any microprocessor-based control system component capable of executing control functions and interfacing with other controllers or third-party controlled devices. Examples include:
  • Primary or global controllers
  • Secondary controllers including Remote Processing Devices (RPDs), Application Specific Controllers (ASCs), and Terminal Unit Controllers


control loop: a combination of interconnected components or functions intended to produce a desired condition in a controlled medium. A control loop typically consists of three main components: a sensor, a controller and a controlled device. These three components or functions interact to control a medium, supply air temperature for example. The sensor measures the data, the controller processes the data and orders the controlled device to cause an action.
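The sensor-controller-controlled device interaction can be sketched in code. The following is an illustrative Python sketch of a simple PI loop, not any vendor's implementation; `read_sensor` and `write_actuator` are assumed stand-ins for real I/O, and the gains are arbitrary examples.

```python
# Minimal sketch of a discrete PI control loop. The sensor measures the
# medium, the controller computes an action, and the controlled device
# (via write_actuator) acts on the medium.

def pi_loop(setpoint, read_sensor, write_actuator,
            kp=2.0, ki=0.1, dt=1.0, steps=60):
    """Drive a controlled device toward setpoint using PI control.
    Returns the last commanded output (0-100% signal)."""
    integral = 0.0
    output = 0.0
    for _ in range(steps):
        error = setpoint - read_sensor()       # sensor measures the medium
        integral += error * dt                 # accumulate error over time
        output = kp * error + ki * integral    # controller computes action
        output = max(0.0, min(100.0, output))  # clamp to a 0-100% signal
        write_actuator(output)                 # controlled device responds
    return output
```

In a real DDC controller this runs in vendor firmware at a fixed scan rate; the sketch only shows the three-component interaction described above.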

direct digital control system (DDC): a networked system of microprocessor-based controllers with analog and digital input and output devices and control logic that is developed and managed by software. Analog-to-Digital (A/D) converters transform analog values such as volts or frequency into digital signals that a microprocessor can use. Analog sensor inputs (AI) can be resistance, voltage or current generators. Most systems distribute the software to remote controllers to minimize the need for continuous communication capability (stand-alone). If pneumatic actuation is required, it is enabled by electronic to pneumatic transducers. The operator workstation is primarily used to monitor control system status, store back-up copies of programs, enunciate and record alarms, initiate and store trends and develop reports. Complex strategies and functions to reduce energy use can be implemented at the lowest level in the system architecture. Other terms used instead of DDC include EMS (Energy Management System), BAS (Building Automation System), FMS (Facility Management System) and EMCS (Energy Management and Control System).
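As a concrete illustration of the A/D conversion described above, the sketch below scales a raw converter count into engineering units for a hypothetical 0-10 V sensor; the bit depth and ranges are assumptions for illustration, not values from any particular system.

```python
def counts_to_units(raw, bits=12, v_min=0.0, v_max=10.0,
                    eng_lo=0.0, eng_hi=100.0):
    """Map a raw A/D count to engineering units for a 0-10 V sensor."""
    full_scale = (1 << bits) - 1                      # 4095 for 12 bits
    volts = v_min + (raw / full_scale) * (v_max - v_min)
    frac = (volts - v_min) / (v_max - v_min)          # fraction of span
    return eng_lo + frac * (eng_hi - eng_lo)          # scale to units
```

For example, a 12-bit count of 4095 on a 0-100% sensor reads back as 100.0.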


functional tests: the full range of tests conducted to verify that specific components, equipment, systems, and interfaces between systems conform to given criteria. These tests are typically used to verify that a sequence of operation is correctly implemented or that a design intent criterion has been met. They are typically done after equipment is placed in full operation. Performance tests, which include efficiency, capacity, load, monitoring and M&V or savings protocols, are considered a subset of functional tests.

network (LAN/WAN): the media that connects multiple intelligent devices. LAN (local area network) implies a network over a small geographic area. A building may have two LANs, one for the building computer network and one for the DDC system. WAN (wide area network) implies data transfer through a router. The most basic task of the network is to connect the DDC controllers so that information can be shared between them.

user interface devices: operator workstation (desktop computer w/ necessary software to provide full access and operational capabilities to the entire DDC system); remote workstation, also known as a portable terminal (laptop computer w/ necessary software to provide full access and operational capabilities to the entire DDC system from a remote location); mobile terminal station, also known as a hand-held terminal (typically supplied and programmed by the vendor for specific set-up tasks); smart stats (thermostats that allow a multiple hierarchy of user entered offsets and adjustments); web browser (an internet based device with limited software that provides some level of access and operational capabilities).

verification checks: the full range of physical inspections and checks conducted to verify that specific components, equipment, systems, and interfaces between systems conform to given criteria. These checks typically verify proper installation, start-up and initial contractor checkout, prior to equipment being functionally tested.



CLASSIFICATIONS
Checks and tests performed under this test method are classified as follows:


  1. Verification checks
    1. Documentation checks: specifications, submittals, TAB report, pre-commissioning report, as-built drawings, training implementation
    2. Hardware/software installation checks: verify nameplate data, verify installed characteristics, verify system is operational
    3. Software implementation checks: verify AI, AO, DI/DO I/O points, verify sensor calibrations; demonstrate offline setpoints, control sequence logic, graphics, alarm codes and standard reports


  2. Functional tests
    1. Software functionality tests
    2. Operational trend tests: observed range of control; can be used to verify many control sequences.
    3. Control sequence tests. Possible tests include: start/stop (on/off); schedule (scheduled start/stop, optimum start/stop [includes warm-up and cool-down], unoccupied setback [includes night purge], sweep); lead/lag (includes runtime and equipment failure); staging; reset (including setpoint change, control by flow and speed control); safeties; economizer; life safety interface; power failure




  1. Prerequisites/Requirements
    1. Information/Documentation. List any special requirements that must be obtained or defined by the individual performing the test prior to conducting the test.




    1. Information. The recommended information to be defined prior to initiating verification and testing activities includes the following:
      1. Facility overview
      2. Overall scope of project including the design strategy and a description of equipment being controlled
      3. DDC system description
      4. Description of operating strategy and level of control desired
      5. Global definition of acceptable performance (from design intent)
      6. Describe DDC interface with/use of any existing controls
      7. Equipment covered by this procedure
      8. Scheduled fieldwork dates
      9. Template: Project Form.doc
      10. Example: Project-1.doc



    2. Required Documentation
      1. Approved copy of DDC specification
      2. Approved copy of controls drawings including sequences of operation, control loop diagrams, I/O points list, schematics and wiring diagrams
      3. DDC system and controlled equipment manufacturer's spec sheets, installation manuals and operation manuals
      4. Approved TAB Report
      5. Approved copy of the Pre-Commissioning Test Report (if required by controls contractor)
      6. Template: Prerequisite Documentation Form.doc

    1. Definition of roles and responsibilities. Understanding the role of each participant is vital to the success of the commissioning process. Any specific contractor requirements must be included in contract documents. Note that the method of implementation could change depending on contractual relationships, the owner's organizational requirements and the expertise available. The owner may also serve as the commissioning service provider. Clearly define the roles and responsibilities of the various parties involved in conducting the test.


  1. Recommendations. At a minimum the following level of detail is recommended:


Commissioning Service Provider (this may be the owner or their representative under contract): prepare application-specific check and test forms; personally verify and record necessary data; submit recorded observations, recommendations and data to the owner for review and approval.

Controls Contractor: certify that all pre-commissioning requirements have been met, subject to required compensation/penalties for excessive commissioning failures; provide an applications engineer and/or controls technician to assist in resolving issues as they arise.
TAB Contractor: assist the controls contractor, as needed, with required flow and pressure settings and minimum outdoor air damper settings to maintain required design ventilation.

Owner: provide specific acceptance criteria (ideally included in the controls specification); allow O&M staff personnel to receive required training and to observe key functional tests, especially critical sequences, as they are conducted.

  1. Template: need to create form


  1. Example: roles and responsibilities-1.doc


  1. Initialization requirements. In order to have a productive and efficient implementation of the procedure, it is recommended that a minimum level of preparedness be defined. This is typically done in contract documents.




  1. Recommendations. The following pre-commissioning/initialization requirements are recommended:
    1. Verify proper pneumatic pressures and conditions
    2. Verify proper electric voltage and amperages, and verify all circuits are free from grounds or faults
    3. Verify integrity/safety of all electrical [and pneumatic] connections
    4. Verify proper interface with fire alarm system


    5. Coordinate with TAB contractor to obtain control settings that are determined from balancing procedures
      1. Optimum VAV duct pressure setpoints
      2. VAV fan VFD minimum and maximum speed settings
      3. VAV Return fan volume tracking settings
      4. Minimum outside air damper settings for air handling units
      5. VAV box minimum and maximum volume settings
      6. Optimum differential pressure setpoints for variable speed pumping
      7. Variable volume pump VFD minimum and maximum speed settings
      8. Air-handler maximum design flow verified
    6. Test, calibrate, and set all digital and analog sensing and actuating devices
    7. Check and set zero and span adjustments for all actuating devices
    8. Check each digital control point
    9. Verify that proper sequences have been installed and tested
    10. Verify that all control loops have been properly tuned

  2. Template: need to create form




  1. General Instructions
    1. Development of Project Specific Checks and Tests: Need text here.


    1. Field-Initiated Modifications to Prepared Project Specific Checks and Tests. Field conditions often present situations in which approved procedures must be modified. The following list describes how best to document the change:
  • Describe the conditions that invalidate the approved testing procedure.
  • Identify the specific steps or tests in the approved procedures that are invalidated.
  • Describe the modified steps to the procedures.
  • Explain how these new steps address the unanticipated on-site conditions without altering the intent or the outcome of the testing.
  • Obtain the appropriate approvals, if necessary.
  • Proceed with the modified testing procedure.




  1. Sensor Calibration Verification Requirements: see #14 and #15
    Temperature: Use a multi-point verification check at various points in the operating range (including minimum, typical, and maximum) utilizing a calibrated thermometer and Dewar flask or a calibrated portable drywell (±0.5°F) temperature probe calibrator and compare it to the I/O point data at a user interface to field-verify through-system measurement tolerance.
    Relative Humidity: Use a single point calibrator or portable environmental chamber that has been lab calibrated with a NIST traceable dew point monitor (±3%) and compare it to the I/O point data at a user interface to field-verify through-system measurement tolerance. Salt baths are not recommended outside of the laboratory. They do not transport well and their accuracy is greatly affected by the unstable environmental conditions usually found in the field.
    Fluid Flow: Use a portable ultrasonic flow meter to spot-check flows and compare readings to the I/O point data at a user interface to field-verify through-system measurement tolerance. One must be aware that UFMs are velocity-dependent devices and are highly vulnerable to variations in flow profile and installation error. They should be considered 5% devices at best for pipe diameters 12 inches and under. UFM flow profile compensation assumes a fully developed flow profile at the calculated Reynolds number. Even at 10 diameters downstream of an elbow, a significantly altered flow profile will occur. It is suggested that flow profile compensation be turned off and the acceptable deviation between the measuring flow meter and the UFM be restricted to 5% for applications with less than 10 pipe diameters of straight length pipe upstream of the UFM. If variable flow conditions exist, both the flow and the flow profile will need to be evaluated at a range of conditions. See ASHRAE Standard 150-2000 Annex D for a detailed method.
    Air Flow: Verification of airflow measurement system calibration in the field is often more difficult than for liquid flow because of large and complex ductwork. Field calibration checks can be performed under steady-state conditions by using calibrated pitot tube or propeller anemometer traverses in at least two planes to field-verify through-system measurement tolerance. Where field conditions vary under normal operation, airflows should be checked over a range of at least five flow rates.
    Pressure. The method for verifying pressure-sensing instrumentation calibration in the field depends on the required accuracy of the process measurement. For example, differential pressure and pressures used to determine flow rate typically require the highest accuracy; pressures used by operations for checking processes may require less accuracy. Use a multi-point verification check at various points in the operating range (including minimum, typical, and maximum) with a calibrated dead weight tester or an electronic pressure calibrator for ranges above atmosphere, or an accurate digital pressure gage for ranges below atmosphere and compare it to the I/O point data at the work station to field-verify through-system measurement tolerance.
    Static pressure. Gage pressure calibration checks can be performed with dead weight testers (inaccuracies are less than 0.05%) or electronic pressure calibrators (inaccuracies are about 0.1%). If the pressure sensor is set up to read absolute pressure, an atmospheric pressure reading will be needed in order to add ambient pressure to the applied reading. Check calibration at various points in the operating range (including minimum, typical, and maximum) and compare it to the I/O point data at a user interface to field-verify through-system measurement tolerance. Vacuum-range pressures can be attained with a vacuum pump, with an atmospheric pressure gage as the reference. Draw a vacuum on the transmitter. Use a 0 to 1000 micron vacuum gage to verify that 0 psia has been reached, if it is one of the calibration-check points. Zero the reference gage if necessary. Gradually bleed air into the system. At each point, stop the bleed and record the data.
    Differential pressure. Use a dead weight tester, an electronic calibrator, or a magnehelic gauge with a pressure bulb connected to the high-pressure side to apply a known pressure at various points in the operating range (including minimum, typical, and maximum) and compare it to the I/O point data at a user interface to field-verify through-system measurement tolerance.
    Very Low Differential Pressure. Use a very sensitive manometer, such as a micromanometer or a narrow-range digital manometer, to spot-check pressures at various points in the operating range (including minimum, typical, and maximum) and compare them to the I/O point data at a user interface to field-verify through-system measurement tolerance. The manometer must be zeroed. A hand pump/bleed valve setup can be used to apply the small pressures required to the high sides. The manometer is adjusted and the instrument readings are compared at the high and low points. The temperature of the manometer fluid should be used to adjust its readings to the standard temperature conditions of the transmitter.
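Whatever reference instrument is used, each of the methods above reduces to comparing reference readings against the I/O point data within a specified tolerance. A minimal sketch of that comparison, assuming readings have already been taken at the minimum, typical and maximum points of the operating range:

```python
def verify_calibration(pairs, tolerance):
    """Compare reference-instrument readings to DDC I/O point values.

    pairs: list of (reference_reading, ddc_reading) tuples taken at the
    minimum, typical and maximum of the operating range.
    Returns (passed, worst_error)."""
    worst = max(abs(ddc - ref) for ref, ddc in pairs)
    return worst <= tolerance, worst
```

For a temperature sensor with a ±0.5°F through-system tolerance, three check points whose worst deviation is 0.4°F would pass; a single 1.0°F deviation would fail.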


  1. Test Equipment. The type and capability of measurement and data acquisition instrumentation required will depend upon the sophistication of the control system, types of sensors used and the monitoring strategy employed. A general equipment list could include:
  • A digital multi-meter
  • Portable power meters w/ or w/o data logger
  • A calibrated averaging thermometer
  • A calibrated drywell temperature calibrator and ice bath
  • A calibrated averaging relative humidity meter
  • A calibrated magnehelic static pressure gauge or deadweight tester
  • A calibrated magnehelic differential pressure gauge
  • A calibrated pitot tube or hot-wire anemometer
  • A calibrated flow hood
  • An ultrasonic flow-meter
  • 1, 4 or 20 channel portable battery powered data loggers
  • Miscellaneous hand tools


  1. General Notes. Provide general notes for the user of the protocols provided; define what this procedure does and does not cover; list any general prerequisites for starting work, requirements for completing the work, general acceptance criteria, general disclaimers and safety issues.


  1. Template: General Instructions Form.doc


Methods

Verification Checks
Controls Hardware Installation and Set-up: For each piece of equipment identified, document pertinent equipment descriptors, including manufacturer, model number, serial number, equipment type, electrical, capacity and efficiency ratings and any other information that may indicate lack of usability or performance. Provide a table for the user to enter information available from design specifications, submittals and installed equipment nameplates. Include any special instructions and provide specific requirements for acceptance. Verify that the correct hardware has been installed as specified and works properly. Has the specified equipment been included?
Network, Controllers, Conduit and Wiring Checks
Nameplate data - Correct equipment
Installed characteristics - Installed as specified
Power-up / General run check

Sensors and Controlled Devices
I/O Point Set-up Checks: I/O points should be defined in a meaningful and complete manner including English-language descriptors, appropriate engineering units, and actual control function; focus on critical points – if they are not correct look elsewhere. AI/AO and DI/DO characteristics include:
Analog Inputs
  1. Name, designation and address
  2. Scanning frequency or COV limit
  3. Engineering units and scaling factors
  4. High and low alarm values and alarm differentials
  5. High and low value reporting limits (reasonableness values)
  6. Default value to be used when the actual value is not reporting
  7. Accuracy Tolerance
  8. Vendor method of calibration
Analog Outputs
  1. Output range
  2. Controlled device
Digital (Binary) Inputs
  1. Message and alarm reporting as specified
  2. Reporting of each change of state, and memory storage of the time of the last change of state
  3. Totalization of the on time (for all motorized equipment status points), and accumulated number of off-to-on totalizations
Digital (Binary) Outputs
  1. Minimum on/off times
  2. Status associations with DI and failure alarming (as applicable)
  3. Default value to be used when the normal controlling value is not reporting
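The AI characteristics listed above can be collected into a simple record for checklist purposes. The sketch below is illustrative only; the field names and the `classify` helper are assumptions, not any vendor's point schema.

```python
from dataclasses import dataclass

@dataclass
class AnalogInput:
    """Illustrative analog-input point record for set-up verification."""
    name: str                 # designation and address
    units: str                # engineering units
    scan_interval_s: float    # scanning frequency (or a COV limit)
    low_alarm: float          # low alarm value
    high_alarm: float         # high alarm value
    low_limit: float          # low reasonableness limit
    high_limit: float         # high reasonableness limit
    default_value: float      # used when the point is not reporting
    tolerance: float          # accuracy tolerance

    def classify(self, value):
        """Return 'bad', 'alarm' or 'ok' for a reported value."""
        if not (self.low_limit <= value <= self.high_limit):
            return "bad"      # outside the reasonableness range
        if value <= self.low_alarm or value >= self.high_alarm:
            return "alarm"
        return "ok"
```

A supply air temperature point, for example, might carry 45/65°F alarm limits inside 0/150°F reasonableness limits, so a reading of 200°F is flagged as bad data rather than as a real alarm.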
Sensors, Actuators, Valves and Dampers Checks
Nameplate Data: Verify that the correct hardware has been installed.
Installed Characteristics: Verify that sensors and controlled devices are properly connected to the correct controller. Verify that the hardware has been installed as specified and in the proper location. Are sensors installed in such a way as to measure the media properly; is adequate attention paid to providing the proper conditions, such as shielding from the sun's radiation, flow straightening, minimum straight lengths of pipe, or insertion depth and insulation? Pay particular attention to global sensors such as outdoor air temperature and chilled water supply and return temperature.
Operational Checks and Through-System Response: Verify that sensor calibration and controlled device range of action and control response are correct. Does the equipment move freely over the required range? The method used for verifying sensor calibration and controlled device function will depend upon I/O point importance and the acceptance criteria and/or tolerance specified.

Controls Software Installation & Programming: For each user interface device and controller, verify that the correct software has been installed as specified and works properly. Have the specified capabilities and functionality been provided? Does the system perform the tasks you expected?
Software Installation and Installed Capabilities
System software: Determine where the current version of the program is kept; is there a backup and where is it kept? When revisions are required, how are updates managed?
Operator graphical interface software: Verify that the required software and features are installed in the proper user interface workstation and are functional.
Example requirements and features include:
Operating system
Multi-tasking capability
Graphical importing capabilities
Screen penetration/Graphic page linking
Dynamic update
Point override features
Dynamic Symbol updating
Graphics package
Symbol library
Standard pictures
For dial-up/remote buildings, graphics may need to reside on a remote workstation.
Operator interface functionality: Verify that the required operator graphical interface functionality is installed in the proper workstation and is functional.
Examples include:
1. Operator interface allows operator to monitor and supervise control of all points.
2. Operator interface allows operator to add new points and edit the system database.
3. Operator interface allows operator to enter programmed start/stop time schedules.
4. Operator interface allows operator to view alarms and messages.
5. Operator interface allows operator to change control setpoint, timing parameters, and loop-tuning constants in all control units.

6. Operator interface allows operator to modify existing control programs in all control units.

7. Operator interface allows operator to upload/download programs, databases, etc. as specified.
Primary control unit software: Verify that the required software/features are installed in each primary controller and are functional.
Examples include:
1. Real time operating software
2. Real time clock/calendar and network time synchronization
3. Primary control unit diagnostic software
4. LAN communication software
5. Direct digital control software
6. Alarm processing and buffer software
7. Data trending, reporting, and buffering software
8. I/O (physical and virtual) database
9. Remote communication software unless it is resident in LAN Interface- Device on the primary LAN
Secondary control unit software: Verify that the required software/features are installed in the proper secondary controller and are functional.
Examples include:
1. Real time operating system software
2. Secondary control unit diagnostic software
3. LAN communication software
4. Control software applicable to the unit it serves that will support a single mode of operation
5. I/O (physical and virtual) database to support one mode of operation
Energy management applications: Verify that the required software/features are installed in the proper user interface and/or controller and are functional.
Examples include:
1. HVAC optimal start/stop
2. Unoccupied temperature setback/up
3. Temperature resets (supply air temperature reset, heating water temperature reset, chilled water temperature reset, condenser water temperature reset)
4. Electrical demand limiting
5. Lighting sweep
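As an example of the reset logic listed above, a supply air temperature reset is often a linear map from outdoor air temperature to the SAT setpoint: warmer supply air in mild weather, cooler in hot weather. The breakpoints below are illustrative assumptions, not values from any specification.

```python
def sat_reset(oat, oat_lo=55.0, oat_hi=75.0, sat_hi=62.0, sat_lo=55.0):
    """Linear supply-air-temperature reset (all values in degF).
    Below oat_lo the setpoint is held at sat_hi; above oat_hi it is
    held at sat_lo; in between it is interpolated linearly."""
    if oat <= oat_lo:
        return sat_hi
    if oat >= oat_hi:
        return sat_lo
    frac = (oat - oat_lo) / (oat_hi - oat_lo)   # position within band
    return sat_hi - frac * (sat_hi - sat_lo)    # interpolate downward
```

During functional testing, a reset like this can be exercised by overriding the outdoor air temperature input at several points and confirming the computed setpoint follows the specified line.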
Commissioning software: Verify that the required software/features are installed in the proper user interface and/or controller and are functional.
Programming and Set-up
Dynamic color graphic screen set-up: Verify that the required graphic screens and features have been set-up on the proper user interface and are functional.
Examples include:
1. Floor Plans with links to Mechanical Room and terminal equipment
2. Mechanical Room floor plans with links to HVAC equipment
3. Key plans with links to floor plan
4. Site plans with links to Buildings
5. Equipment screens linked to related equipment
6. Alarms showing on screens
7. Adjustable setpoints
8. Tabular summary pages
Scheduling Set-up: Verify that the required schedules have been programmed. List as necessary.
Monitored Points Set-up: Verify that the required monitoring points have been programmed, including pseudo and calculated points required for performance monitoring and preventive maintenance. Are they viewable in the appropriate graphic screens? Do they update at the proper time interval?
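A typical calculated point of the kind described above is a water-side cooling load derived from flow and loop delta-T, using the standard 500 Btu/h per gpm·°F water factor and 12,000 Btu/h per ton. The sketch is illustrative; the inputs would come from hardware points.

```python
def chw_load_tons(gpm, supply_t, return_t):
    """Calculated point: chilled-water cooling load in tons from
    flow (gpm) and loop delta-T (degF), water-side."""
    return 500.0 * gpm * (return_t - supply_t) / 12000.0
```

For example, 480 gpm at a 10°F delta-T evaluates to a 200-ton load, a point worth trending for performance monitoring.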
Trends Set-up: For both commissioning related trends as well as for long term monitoring trends determine if the data is being sampled at the proper time intervals required and if, how, and where the data is being archived for later analysis. Determine if the appropriate functionality has been provided.
Examples include:
1. Tabular and graphical formats
2. Any point, hardware or software (virtual)
3. Simultaneous display of values
4. User adjustable ranges and scaling
5. High-resolution sampling capability for PID control loops
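Trend sampling is typically either interval-based or change-of-value (COV) based. The sketch below illustrates the COV case, recording a sample only when it moves beyond a deadband from the last recorded value; the (time, value) data shape is an assumption for illustration.

```python
def cov_trend(samples, cov_limit):
    """Record a sample only when it differs from the last recorded
    value by more than the change-of-value (COV) limit. Interval
    trends simply record every sample instead.

    samples: iterable of (timestamp, value) pairs."""
    recorded = []
    for t, value in samples:
        if not recorded or abs(value - recorded[-1][1]) > cov_limit:
            recorded.append((t, value))   # keep only significant changes
    return recorded
```

When verifying trend set-up, a COV limit that is too wide hides the control behavior, while one that is too narrow fills the archive; the limit should suit the point's tolerance.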
Alarms Set-up
Prioritization (critical; informational)
Routing (enunciation, printer, call-out)
Auto Dial
Alarm Acknowledgment
Graphic Links
Standard Reports Set-up
Offline demonstration of control sequences: Review the logic programming; evaluate whether or not the specified requirements have been executed; test offline if system functionality allows. Critical control logic to review includes:
Motor start/stop
Ventilation and air-side economizer
VAV terminal unit
Chiller sequencing
Interface with life-safety
Energy management applications.
Other specialized control logic, such as that required for a cool storage system (specify system or equipment)
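As an illustration of the kind of logic reviewed offline, the sketch below models a simplified dry-bulb air-side economizer decision. The 70°F high limit, the comparison logic, and the damper positions are assumptions for illustration only; the project's specified sequence of operation governs the actual logic.

```python
def economizer_damper_command(oat, rat, min_oa_pct, high_limit=70.0):
    """Minimal dry-bulb air-side economizer sketch.  Returns the
    outside-air damper position in percent open.  The high limit and
    the OAT-vs-RAT comparison are illustrative assumptions."""
    if oat < high_limit and oat < rat:
        # Free cooling available: allow full modulation.  A real
        # sequence would modulate toward a mixed-air or supply-air
        # temperature setpoint rather than commanding 100%.
        return 100.0
    # Otherwise hold the damper at the minimum ventilation position.
    return float(min_oa_pct)

print(economizer_damper_command(oat=55.0, rat=74.0, min_oa_pct=20))  # free cooling
print(economizer_damper_command(oat=82.0, rat=74.0, min_oa_pct=20))  # minimum OA
```

Walking the programmed logic against a table of cases like these is one way to evaluate the economizer sequence before any field conditions are forced.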

Non-Compliance and Corrections: Document any item that does not comply with design intent or specification requirements. Include criteria used to determine non-compliance.

Functional Tests
General Procedure. Due to the vast differences that exist between DDC systems, the systems they control, the types of controls and sensors available, and the potential interfaces with new and existing installations, a project-specific set of functional tests must guide the testing. The following list of tests is meant to act as a guide. The assistance of the building operator or controls engineer is recommended when programming must be altered to force a condition to be tested. All inputs, outputs and global variables that have been forced for purposes of performing the following tests must be returned to an as-programmed state.
Through the user interface conduct the following series of tests:
Raise/lower space temperatures in software and verify that the system responds appropriately.
Raise/lower the mixed-air temperature and verify damper positions.
Raise/lower static pressure setpoints and verify variable speed drive or vortex control.
Verify that time-of-day start-up and shut-down control sequence initiates the proper system response.
Trend all required points at one-minute time intervals to verify trending capabilities.
Verify that all alarm conditions are monitored.
Initiate a high priority, off-hours call out alarm and verify that the remote dial-out procedure has been carried out correctly.
Print out all required reports.
Verify that the interface with system safeties allows operation of dampers, etc., when safety conditions are met.
Conduct an emergency start-up after power failure test. Verify that all systems return to automatic control.
Verify that the DDC system maintains required outside air quantities under low airflow conditions.
Disconnect the communication cable to the DDC system and verify that the DDC panel can control the respective system (stand-alone control).
Disconnect a DDC space-temperature sensor and verify control sequence default.
Verify the time duration of battery backup.
Perform a remote dial-up using the remote workstation. Verify that all specified capabilities are enabled.
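The sensor-disconnect test above exercises fail-soft behavior along the lines of the sketch below, which substitutes a default value when a space temperature reading is missing or implausible. The range limits and the default value are illustrative assumptions; the specified control sequence defines the actual default behavior.

```python
def space_temp_for_control(sensor_value, default=72.0, lo=40.0, hi=100.0):
    """Sketch of a fail-soft default for a space temperature input:
    readings outside a plausible range (or None on a lost input) are
    treated as sensor failure and a default value is used instead."""
    if sensor_value is None or not (lo <= sensor_value <= hi):
        return default
    return sensor_value

print(space_temp_for_control(74.2))   # normal reading passes through
print(space_temp_for_control(None))   # disconnected sensor -> default
```

During the functional test, the observed system response should match whatever default strategy (fixed value, last good value, or fixed output) the sequence specifies.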


Testing Sequences of Operation.
Operational Trend Tests. The use of operational trends to test sequences on-line is encouraged wherever possible. Operational trend tests typically rely upon normal system operation to provide the data necessary to evaluate system function. They can be used to evaluate the following sequences: scheduled occupied and unoccupied modes to verify system stability and equipment start/stop; terminal box operation; VFD-controlled equipment cycling; control loop stability; and energy-efficiency applications such as night setback, economizer mode, lighting sweep, and various reset schedules. It should be understood that proper DDC equipment installation, I/O programming, and sensor calibration must be verified before such tests can be performed. If sufficient sensors are provided and pseudo or calculated performance-monitoring points are programmed, trends can even be used to evaluate system performance. If manipulation of the control system is used to provide needed operating conditions, care must be taken not to manipulate equipment that is interlocked with the equipment under test. Direct manipulation of the sequence under test will not yield a valid test.
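For example, a scheduled start/stop trend test can be reduced to locating the observed start in a status trend and comparing it to the schedule. A minimal sketch with hypothetical data; the 5-minute criterion is an assumed acceptance threshold, not a requirement of this procedure:

```python
from datetime import datetime

def observed_start_time(status_trend, on_value=1):
    """Return the timestamp of the first sample where the equipment
    status point reads 'on'.  status_trend is a chronological list of
    (datetime, status) samples; returns None if the unit never starts."""
    for stamp, status in status_trend:
        if status == on_value:
            return stamp
    return None

# Hypothetical one-minute supply-fan status trend around a 6:00 start.
trend = [
    (datetime(2010, 3, 8, 5, 58), 0),
    (datetime(2010, 3, 8, 5, 59), 0),
    (datetime(2010, 3, 8, 6, 0), 1),
    (datetime(2010, 3, 8, 6, 1), 1),
]
start = observed_start_time(trend)
scheduled = datetime(2010, 3, 8, 6, 0)
# Assumed acceptance criterion: start within 5 minutes of schedule.
deviation_min = abs((start - scheduled).total_seconds()) / 60
print(f"started {start}, deviation {deviation_min:.0f} min")
```

The same pattern extends to stop times, optimal start, and unoccupied setback, provided the status points are trended at a resolution finer than the acceptance threshold.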
An operational trend test protocol for each sequence to be tested is necessary to define the method of identifying acceptable performance. The test protocol should include the following information:
  • Test name and description of control sequence to be tested
  • Prerequisites for initiating test such as verification of sensor calibration
  • Conditions under which the test is to be performed such as season of year or level of occupancy
  • Test duration
  • Data to be gathered; list the specific points to be trended and, if multiple trend reports are required, which points need to be grouped together. If new pseudo or calculated points are required, define the logic or calculation method.
  • Data sampling, reporting and archival intervals; are instantaneous values sufficient, or are interval averages required?
  • Method of data acquisition and data storage
  • Specific measurable or quantifiable criteria for demonstrating acceptable performance
  • Data analysis and plotting requirements
  • Results reporting requirements
Include any notes of caution or special requirements that must be obtained or defined by the individual performing the test.
Operational trend test template: DDC Operational Trend Test Form.doc


Example trend tests:
  1. Schedule Start/Stop and Unoccupied Setback – see #9, revise to meet requirements
  2. Chilled Water Temperature Reset – see #9, revise to meet requirements
  3. Hot Deck Control – see #6, pg. 6-41, revise as required


Sequence of Operation Test Protocol. When trend data alone is not sufficient to determine compliance with defined acceptance criteria, it is necessary to develop a more comprehensive protocol. This is especially true for critical sequences that involve staging of equipment and systems, interlocks with other systems, or stand-alone operation of critical equipment, and where portable instruments are required to gather the necessary data.
    For each sequence to be tested it is necessary to define the method of identifying acceptable performance. The test protocol should include the following information:
  • Description of control sequences in as much detail as necessary
  • Test name and sequence to be tested
  • Prerequisites for initiating the test, such as verified calibration of all sensors used to test the sequence
  • Method of test including means of initiating and stepping through the sequence
  • Conditions under which the test is to be performed such as season of year or level of occupancy
  • Test duration
  • Data to be gathered including method and location of measurements required
  • Instrumentation requirements, including measurement tolerance, method of data acquisition, sampling, reporting and archival intervals, and data storage; are instantaneous values sufficient, or are interval averages required?
  • Specific measurable or quantifiable criteria for demonstrating acceptable performance
  • Data analysis and plotting requirements
  • Results reporting requirements
Include any notes of caution to the user and list any special requirements that must be obtained or defined by the individual performing the test.
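Stepping through a staged sequence offline can also be sketched in code to make the expected state transitions explicit before field testing. The example below models a simplified two-chiller staging decision; the load thresholds, the deadband, and the omission of timers and interlocks are simplifying assumptions for illustration only.

```python
def next_stage(current, load_tons, stage_up=450.0, stage_down=380.0, max_stages=2):
    """Minimal chiller staging sketch with a deadband between the
    stage-up and stage-down thresholds.  A real sequence would also
    include staging delay timers, proof-of-flow interlocks, and
    lead/lag rotation; those are deliberately omitted here."""
    if load_tons > stage_up and current < max_stages:
        return current + 1
    if load_tons < stage_down and current > 1:
        return current - 1
    return current

# Step the sequence through a rising-then-falling load profile and
# record the expected stage at each step.
stages, stage = [], 1
for load in (300, 400, 480, 500, 420, 360):
    stage = next_stage(stage, load)
    stages.append(stage)
print(stages)  # -> [1, 1, 2, 2, 2, 1]
```

Tabulating the expected transitions this way gives the test performer concrete pass/fail states to compare against when the sequence is stepped through in the field.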
Generic SO test template: DDC Sequence Test Form.doc
Example sequence tests
Start-up after power failure???

Trends: Trend all required points at one-minute time intervals to verify trending capabilities. At the completion of verification and functional testing, all trend data acquired as part of these activities should be archived in long-term storage and removed from controller memory. Trends used for testing should be made inactive unless they are also required for long-term monitoring.
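The archive-then-clear step can be scripted where the vendor tools allow export. A minimal sketch, assuming trend samples are held as timestamp/value pairs and archived to CSV; the file names, directory, and data layout are assumptions, and the actual export mechanics depend on the DDC vendor's tools:

```python
import csv
import os

def archive_and_clear(trend_name, samples, archive_dir="trend_archive"):
    """Write commissioning trend samples to a CSV file in long-term
    storage, then return an empty list to stand in for clearing the
    controller-side trend buffer."""
    os.makedirs(archive_dir, exist_ok=True)
    path = os.path.join(archive_dir, f"{trend_name}.csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "value"])
        writer.writerows(samples)
    return []  # controller buffer cleared after successful archive

buffer = [("2010-03-08T08:00:00", 72.1), ("2010-03-08T08:01:00", 72.3)]
buffer = archive_and_clear("AHU1_SAT", buffer)
```

Clearing only after the archive write succeeds guards against losing commissioning data if the export fails.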

Remote dial-up: Perform a remote dial-up using the remote workstation. Verify that all specified capabilities are enabled.

Critical alarm call-out: Using the operator work station, initiate a high priority, off-hours call out alarm and verify that the remote dial-out procedure has been carried out correctly.

Access/Passwords: At the conclusion of testing, verify that all specified individuals are provided with their specified level of access and an appropriate password.

As-built Records
Obtaining complete and accurate as-built records and drawings is paramount to maintaining the viability and persistent benefits of a DDC system installation. As-built records to be obtained include the following:


  1. O&M Materials
    1. User guides
    2. Programming manuals
    3. Maintenance instructions
    4. Spare parts list
  2. Record Documents
    1. Updated logic diagrams and installation and wiring drawings reflecting installed conditions
    2. Electronic copies of graphics software
    3. Certificates of conformance and warranty

Training
Recommendations. Training of facility staff is critical to obtaining the desired benefit of installing or upgrading a DDC system. Each operator and facility supervisor needs to know his or her way around the operator workstation. They will need to be able to identify, add, and delete I/O points, change setpoints, manage alarms and reports, create and plot trends, and even revise sequences if needed. All on-site training should be videotaped whenever possible.
At a minimum, the on-site training should include an overview of the DDC system installation provided, an explanation of all DDC components and functions, an explanation of control strategies, instruction on operator workstation access and interface syntax, data back-up and archival procedures, an explanation of the set-up and generation of all DDC reports and graphics, a description of alarm conditions and acknowledgment procedures, and instruction on system operation through the remote workstation and mobile terminal stations. It can also include on-site training detailing preventive maintenance of system hardware and calibration of sensors, transducers, and network communications.


Results and Recommendations for Final Acceptance: to be completed.



References Used to Develop this Procedure



ASHRAE Guideline 11P: Method of Test for Building HVAC Control Systems, Working Draft. January 2000. ASHRAE, Atlanta, GA. (#11)

ASHRAE Guideline 14P: Measurement of Energy and Demand Savings, Annex A2 Calibration Techniques and Annex E7 Generic Test Protocol. 2001 Submittal Draft. ASHRAE, Atlanta, GA. (#2)

ASHRAE Research Project 1054-RP Cool Storage Operating and Control Strategies: Presentation of a Framework. Chad Dorgan, Charles Dorgan, Zachary Obert. June 1999. ASHRAE, Atlanta, GA. (#18)

Building Commissioning Assistance Handbook, http://www.ci.seattle.wa.us/light/conserve/business/bdgcoma/cv6_bcam.htm, bca3.rtf. Bill Durland. Seattle City Light, Seattle, Washington. (#9)

DDC Online (http://www.energy.iastate.edu/ddc_online/intro/index.htm), Iowa Energy Center, Iowa. (#17)

Engineered Systems Training Series Paper: Back to Basics. Rebecca Ellis and Howard McKew. 1996 to present. Sebesta Blomberg & Associates, Inc., Minneapolis, MN. (#11)

HVAC Commissioning Guideline. Ross Sherrill. 1995. Sherrill Engineering, South San Francisco, CA. (#16)

Multnomah County Protocols, Energy Management System, Emsml11.pro. Mike Kaplan, Amy Joslin. Multnomah County, Oregon. (#8)

NEBB Procedural Standards for Building Systems Commissioning. Rev 2.0 November 1999. National Environmental Balancing Bureau, Gaithersburg, Maryland. (#7)

HVAC Functional Inspection and Testing Guide. James Y. Kao. 1992. (U. S.) National Institute of Standards and Technology, Gaithersburg, Maryland. (#12)

PG&E CES Commissioning Guideline, 6.2 Test Plan for Energy Management Systems. Bill Malek, Bryan Caluwe. 1995 – Internal document. Pacific Gas & Electric Company, San Francisco, CA. (#6)

PG&E Commissioning Test Protocol Library Release 1.1, templates.doc. 2001. Pacific Gas & Electric Company, San Ramon, CA. (#1)

PG&E Commissioning Test Protocol Library Questionnaire, #16:list of items to be included in a standardized protocol template. 2001. Pacific Gas & Electric Company, San Ramon, CA. (#3)

University of Washington, Facility Design Information Manual - Environmental Control Systems, http://depts.washington.edu/fsesweb/fdi/index.html. Rev 04 September 1995. Facilities Services, University of Washington, Seattle, Washington. (#10)

University of Wisconsin, Madison, DDC for HVAC Controls, class handouts. Jay Santos and Bob Shultz. October 2000. Madison, Wisconsin. (#5)

US Army Standard HVAC Control Systems Commissioning and Quality Verification User Guide. Glen Chamberlin and David Schwenk. September 1994. U.S. Army Engineering and Housing Support Center, Fort Belvoir, VA. (#13)

USDOE/FEMP/PECI Version 2.05 Commissioning Tests, Building Automation System Prefunctional Checklist, CONTROLS.PC5; Calibration and Leak-by Test Procedures, CALIBDIR.PC1, http://www.peci.org/cx/guides.html. 1998. PECI, Portland, Oregon. (#14)

 
