Information Security Operations Management Procedure

This is not a current document. It has been repealed and is no longer in force.

Section 1 - Procedure

Audience

(1) All University staff, vendors, students, volunteers, and members of advisory and governing bodies, in all campuses and locations of the University and at all times while engaged in University business or otherwise representing the University.

Executive Summary

(2) The University of Newcastle is committed to and is responsible for ensuring the confidentiality, integrity, and availability of the data and information stored on its systems.

(3) All users interacting with information assets have a responsibility to ensure the security of those assets.

(4) The University must have controls in place to ensure the smooth operation of the University’s ICT resources. Users must be trained, equipped and periodically reminded to use information and associated infrastructure securely.

Section 2 - Operational Procedures and Responsibilities

Objective

(5) To ensure correct and secure operations of information systems.

Documented Operating Procedures

(6) Operating procedures and responsibilities for information systems must be authorised, documented, and maintained.

(7) Information owners and System Owners must ensure that Standard Operating Procedures (SOP) and standards are:

  1. documented;
  2. approved by the appropriate authority;
  3. consistent with University policies;
  4. reviewed and updated periodically;
  5. reviewed and updated when there are changes to equipment/systems or changes in business services and the supporting information systems operations; and
  6. reviewed and updated following a related security incident investigation.

(8) The documentation must contain detailed instructions regarding:

  1. information processing and handling;
  2. system restart and recovery procedures;
  3. backup and recovery including on-site and off-site storage;
  4. exceptions handling, including a log of exceptions;
  5. output and media handling, including secure disposal or destruction;
  6. management of audit and system log information;
  7. change management including scheduled maintenance and interdependencies;
  8. computer room management and safety;
  9. Information Incident Management Process;
  10. Disaster Recovery;
  11. Business Continuity Plan; and
  12. Contact information for operations, technical, emergency and business personnel.

Change Management

(9) Changes to business processes and information systems that affect information security must be controlled.

(10) All changes to the University's ICT services and system environment, including provisioning and de-provisioning of assets, promotion of code, configuration changes and changes to Standard Operating Procedures must be authorised by the University IT Change Advisory Board (CAB).

(11) The change management process must follow the guidelines, approvals and template provided as per the University Transition Process.

(12) Changes must be controlled by:

  1. identifying and recording significant changes;
  2. assessing the potential impact, including that on security, of the changes;
  3. obtaining approval of changes from those responsible for the information system;
  4. planning and testing changes including the documentation of rollback procedures;
  5. communicating change details to relevant personnel, Users and stakeholders; and
  6. evaluating whether planned changes were implemented as intended.

(13) Information owners and System Owners must plan for changes by:

  1. assessing the potential impact of the proposed change on security by conducting a security review and a Threat and Risk assessment;
  2. identifying the impact on agreements with business partners and external parties including information sharing agreements, licensing and provision of services;
  3. preparing change implementation plans that include testing and contingency plans in the event of problems;
  4. obtaining approvals from affected Information owners; and
  5. training technical or operational staff as necessary.

(14) Information owners and System Owners must implement changes by:

  1. notifying affected internal parties, business partners, and external parties;
  2. following the documented implementation plans;
  3. training users if necessary;
  4. documenting the process throughout the testing and implementation phases; and
  5. confirming the changes have been performed and no unintended changes took place.
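
By way of illustration only, a change record capturing the controls in clauses (12) to (14) could be structured as in the following sketch. The class and field names are hypothetical and the example values are invented; this is not a University system.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ChangeRecord:
    """Illustrative record of a controlled change (see clauses 12-14)."""
    change_id: str
    description: str
    impact_assessment: str             # including the security impact
    approved_by: Optional[str] = None  # responsible Information/System Owner
    rollback_procedure: str = ""
    tested: bool = False
    notified_parties: List[str] = field(default_factory=list)

    def ready_to_implement(self) -> bool:
        """A change may proceed only when assessed, approved, tested and
        a rollback procedure has been documented."""
        return all([
            self.impact_assessment,
            self.approved_by is not None,
            self.tested,
            self.rollback_procedure,
        ])


# Example: an unapproved change is not ready to implement (invented values).
change = ChangeRecord(
    change_id="CHG-0001",
    description="Apply security patch to web server",
    impact_assessment="Low risk; brief outage during restart",
    rollback_procedure="Reinstall previous package version",
    tested=True,
)
assert change.ready_to_implement() is False
```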

Capacity Management

(15) The use of information system resources must be monitored and optimised with projections made of future capacity requirements.

(16) Information owners and System Owners are responsible for implementing capacity management processes by:

  1. documenting capacity requirements and capacity planning processes;
  2. including capacity requirements in service agreements; and
  3. monitoring and optimising information systems to detect impending capacity limits.

(17) Information owners and System Owners must project future capacity requirements based on:

  1. new business and information system requirements;
  2. statistical or historical capacity requirements; and
  3. current and expected trends in information processing capabilities (e.g. introduction of more efficient hardware or software).

(18) Information owners and System Owners must use trend information from the capacity management process, to identify and remediate potential bottlenecks that present a threat to system security or services.
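
As an illustration of projecting future capacity from trend information, the sketch below fits a simple linear trend to historical utilisation samples and estimates when an assumed capacity limit would be reached. The sample figures and the 60 TB limit are invented.

```python
from datetime import date, timedelta

# Hypothetical monthly storage utilisation samples (TB used).
samples = [(date(2023, 1, 1), 40.0), (date(2023, 2, 1), 42.5),
           (date(2023, 3, 1), 45.1), (date(2023, 4, 1), 47.8)]
capacity_tb = 60.0  # assumed capacity limit

# Fit a simple linear trend: usage = slope * days + intercept.
days = [(d - samples[0][0]).days for d, _ in samples]
usage = [u for _, u in samples]
n = len(samples)
slope = (n * sum(d * u for d, u in zip(days, usage)) - sum(days) * sum(usage)) / \
        (n * sum(d * d for d in days) - sum(days) ** 2)
intercept = (sum(usage) - slope * sum(days)) / n

# Project the date at which usage would reach the capacity limit.
days_to_limit = (capacity_tb - intercept) / slope
print("Projected capacity exhaustion:",
      samples[0][0] + timedelta(days=round(days_to_limit)))
```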

Separation of Development, Testing and Production Environments

(19) Development, testing and production environments must be separated to reduce the risk of unauthorised access or changes to the production environment.

(20) Information owners and System Owners must:

  1. separate production environments from test and development environments by using different servers, networks and where possible different domains; 
  2. ensure that production servers do not host test or development services or applications;
  3. prevent the use of test and development identities as credentials for production systems;
  4. store source code in a secure location away from the production environment and restrict access to specified personnel;
  5. prevent access to compilers, editors and other tools from production systems;
  6. use approved change management processes for promoting software from development / test to production;
  7. prohibit the use of production data in development, test or training systems; and
  8. prohibit the use of sensitive information in development, test or training systems in accordance with the System Acquisition, Development and Maintenance Procedure.

Section 3 - Protection from Malware

Objective

(21) To ensure that information systems are protected against malware.

Controls Against Malicious Code

(22) Detection, prevention and recovery controls – supported by user awareness procedures – must be implemented to protect against malware.

(23) Information owners and System Owners must protect University information systems from malicious code by:

  1. installing, updating and using software designed to scan, detect, isolate and delete malicious code;
  2. preventing unauthorised Users from disabling installed security controls;
  3. prohibiting the use of unauthorised software;
  4. checking files, email attachments and file downloads for malicious code before use;
  5. maintaining business continuity plans to recover from malicious code incidents;
  6. maintaining a critical incident management plan to identify and respond to malicious code incidents;
  7. maintaining a register of specific malicious code countermeasures (e.g. blocked websites, blocked file extensions, blocked network ports) including a description, rationale, approval authority and the date applied; and
  8. developing user awareness programs for malicious code countermeasures.
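
The register of countermeasures described in clause (23) could, for example, be held in a structured form that supports automated checks. The following sketch is illustrative only; the blocked extensions, rationales and dates are invented.

```python
from datetime import date

# Hypothetical register entries: blocked file extensions with rationale,
# approval authority and the date the countermeasure was applied.
blocked_extensions = {
    ".exe": {"rationale": "Executable attachments commonly carry malware",
             "approved_by": "IT Security Team", "applied": date(2020, 1, 15)},
    ".js":  {"rationale": "Script attachments used in phishing campaigns",
             "approved_by": "IT Security Team", "applied": date(2020, 3, 2)},
}


def is_blocked(filename: str) -> bool:
    """Return True if the file's extension appears in the register."""
    return any(filename.lower().endswith(ext) for ext in blocked_extensions)


print(is_blocked("invoice.pdf.exe"))  # True
print(is_blocked("report.docx"))      # False
```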

(24) University IT Security staff are responsible for communicating technical advice and providing information and awareness activities regarding malicious code.

Section 4 - Backup

Objective

(25) To protect against loss of data.

Information Backup

(26) Backup copies of information, software and system images must be made, secured, and be available for recovery.

(27) Information owners and System Owners must define and document backup and recovery processes that consider the confidentiality, integrity and availability requirements of information and information systems.

(28) Backup and recovery processes must comply with:

  1. University business continuity plans (if applicable);
  2. policy, legislative, regulatory and other obligations; and
  3. records management requirements (refer Records and Information Management Policy).

(29) The documentation for backup and recovery must include:

  1. types of information to be backed up;
  2. schedules for the backup of information and information systems;
  3. backup media management;
  4. methods for performing, validating and labelling backups; and
  5. methods for validating the recovery of information and information systems.
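
As an illustration of performing and validating backups, the sketch below archives a directory and records a checksum that can be verified before recovery. It is a generic example, not the University's backup tooling, and the paths shown are placeholders.

```python
import hashlib
import tarfile
from pathlib import Path


def create_backup(source_dir: str, archive_path: str) -> str:
    """Archive a directory and return the SHA-256 checksum of the archive."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    return hashlib.sha256(Path(archive_path).read_bytes()).hexdigest()


def validate_backup(archive_path: str, expected_checksum: str) -> bool:
    """Confirm the archive is intact before attempting recovery."""
    actual = hashlib.sha256(Path(archive_path).read_bytes()).hexdigest()
    return actual == expected_checksum


# Example (placeholder paths):
# checksum = create_backup("/data/finance", "/backups/finance-2024-01-01.tar.gz")
# assert validate_backup("/backups/finance-2024-01-01.tar.gz", checksum)
```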

(30) Backup media and facilities must be appropriately secured based on a security review or Risk assessment. Controls to be applied include:

  1. use of approved encryption;
  2. physical security;
  3. access controls;
  4. methods of transit to and from off-site locations;
  5. appropriate environmental conditions while in storage; and
  6. off-site locations must be at a sufficient distance to escape damage from an event at the main site.

Section 5 - Log Management

Objective

(31) To log events and monitor compliance.

Event Logging

(32) Event logs recording user activities, exceptions, faults and information security events must be produced, kept and regularly reviewed.

(33) Information owners must ensure that event logs are used to record user and system activities, exceptions and events (security and operational). The degree of detail to be logged must be based on the value and sensitivity of the information and the criticality of the system. The resources required to analyse the logs must also be considered. Where applicable, event logs must include:

  1. user ID;
  2. system activities;
  3. dates, times and details of key events (e.g. logon, logoff);
  4. device identity and location;
  5. logon method;
  6. records of successful and unsuccessful system access attempts;
  7. records of successful and unsuccessful data and other resource access attempts;
  8. changes to system configuration;
  9. use of elevated privileges;
  10. use of system utilities and applications;
  11. network addresses and protocols;
  12. alarms raised by the access control system;
  13. activation and de-activation of protection systems (e.g. anti-virus, intrusion detection); and
  14. records of transactions executed by users in applications.
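
A structured log format helps capture the fields in clause (33) consistently. The sketch below is illustrative only; the field names follow the clause and the example values are invented.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("security_events")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def log_event(user_id: str, activity: str, outcome: str,
              device: str, logon_method: str) -> None:
    """Emit an event record carrying the details required by clause (33)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "activity": activity,
        "outcome": outcome,          # e.g. success or failure
        "device": device,
        "logon_method": logon_method,
    }
    logger.info(json.dumps(record))


# Example: record a successful interactive logon (invented values).
log_event("u1234567", "logon", "success", "LIB-PC-042", "interactive")
```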

(34) Event logs may contain sensitive information and therefore must be safeguarded in accordance with the requirements of the section on the Protection of Log Information.

(35) System administrators must not have the ability to modify, erase or de-activate logs of their own activities.

(36) If event logging is disabled, the decision must be documented, including the name and position of the approver, the date, and the rationale for de-activating the log.

(37) Event logs may be configured to raise alerts when certain events or signatures are detected. Information owners and System Owners must establish and document alarm response procedures to ensure alarms are responded to immediately and consistently. Normally, response to an alarm will include:

  1. identification of the event;
  2. isolation of the event and affected assets;
  3. identification and isolation of the source;
  4. corrective action;
  5. forensic analysis;
  6. action to prevent recurrence; and
  7. securing of event logs as evidence.

Protection of Log Information

(38) Information system logging facilities and log information must be protected against tampering and unauthorised access.

(39) Information owners must implement controls to protect logging facilities and log files from unauthorised modification, access or destruction. Controls must include:

  1. physical security safeguards;
  2. restrictions on the ability of administrators and operators to erase or de-activate logs;
  3. multifactor authentication for access to highly-restricted records;
  4. backup of audit logs to off-site facilities;
  5. automatic archiving of logs to remain within storage capacity; and
  6. scheduling the audit logs as part of the records management process.

(40) Event logs must be retained in accordance with the records retention schedule for the information system.

(41) System logs for University critical IT infrastructure (P1 list) must be retained for at least 30 days online and archived for 90 days.
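
As an illustration of the retention periods in clause (41), a housekeeping job along the following lines could move logs to archive after 30 days online and remove archived logs after a further 90 days. The directory paths are placeholders, the interpretation of the archive period is an assumption, and a real implementation would also honour the investigation holds in clause (43).

```python
import shutil
import time
from pathlib import Path

ONLINE_DAYS = 30   # keep logs online for at least 30 days (clause 41)
ARCHIVE_DAYS = 90  # then keep them archived for 90 days (clause 41)


def housekeep_logs(online_dir: str, archive_dir: str) -> None:
    """Move logs older than 30 days to the archive, and delete archived logs
    once the combined retention period has passed."""
    now = time.time()
    for log_file in Path(online_dir).glob("*.log"):
        age_days = (now - log_file.stat().st_mtime) / 86400
        if age_days > ONLINE_DAYS:
            shutil.move(str(log_file), archive_dir)
    for archived in Path(archive_dir).glob("*.log"):
        age_days = (now - archived.stat().st_mtime) / 86400
        if age_days > ONLINE_DAYS + ARCHIVE_DAYS:
            archived.unlink()


# Example (placeholder paths):
# housekeep_logs("/var/log/p1-systems", "/srv/log-archive/p1-systems")
```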

(42) Datacentre physical access logs must be made available for at least 90 days and CCTV records must be retained for at least 30 days.

(43) Logs must be retained indefinitely if an investigation has commenced or it is known that evidence may be obtained from them.

Administrator and Operator Logs

(44) Activities of privileged users must be logged and the log subject to regular independent review.

(45) The activities of system administrators, operators and other privileged users must be logged, including:

  1. the time an event (e.g. success or failure) occurred;
  2. event details including files accessed, modified or deleted, errors and corrective action taken;
  3. the account and the identity of the privileged user involved; and
  4. the system processes involved.

(46) Logs of the activities of privileged users must be checked by the Information owner or delegate. Checks must be conducted regularly and randomly. The frequency must be determined by the value and sensitivity of the information and criticality of the system. Following verification of the logs they must be archived in accordance with the applicable records retention schedule.

Clock Synchronisation

(47) Computer clocks must be synchronised for accurate recording.

(48) System administrators must synchronise information system clocks to the local router gateway or a University approved host.

(49) System administrators must confirm system clock synchronisation following power outages and as part of incident analysis and event log review.
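
A periodic check of system clock offset against an approved time source could look like the following sketch. It assumes the third-party ntplib package and uses a placeholder NTP host and drift threshold; it is not the University's monitoring tooling.

```python
import ntplib  # third-party package: pip install ntplib

MAX_OFFSET_SECONDS = 1.0  # assumed acceptable drift


def check_clock(ntp_host: str = "ntp.example.edu") -> bool:
    """Compare the local clock with the approved NTP host and report drift."""
    response = ntplib.NTPClient().request(ntp_host, version=3)
    print(f"Clock offset from {ntp_host}: {response.offset:.3f} seconds")
    return abs(response.offset) <= MAX_OFFSET_SECONDS


# Example usage:
# if not check_clock():
#     print("Clock drift exceeds threshold; investigate and re-synchronise.")
```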

Section 6 - Control of Operational Software

Objective

(50) To ensure the integrity of production systems.

Installation of Software on Production Systems

(51) The installation of software on production information systems must be controlled.

(52) To minimise the risk of damage to production systems, Information owners must implement the following procedures when installing software:

  1. updates of production systems must be planned, approved, assessed for impacts, tested and logged;
  2. a Change and Release Coordinator must be appointed to coordinate the install and update of software, applications and program libraries;
  3. operations personnel and end users must be notified of the changes, potential impacts and, if required, given additional training;
  4. production systems must not contain development code or compilers;
  5. user acceptance testing must be extensively and successfully conducted on a separate system prior to production implementation;
  6. a rollback strategy must be in place and previous versions of application software retained;
  7. old software versions must be archived with configuration details and system documentation; and
  8. updates to program libraries must be logged.

Section 7 - Vulnerability Management

Objective

(53) To prevent exploitation of technical vulnerabilities.

Management of Technical Vulnerabilities

(54) Regular assessments must be conducted to evaluate information system vulnerabilities and the management of associated risk.

(55) To support technical vulnerability management, Information owners and System Owners must maintain an inventory of information assets in accordance with the Information Security Asset Management Procedure. Specific information must be recorded, including:

  1. the software vendor;
  2. version numbers;
  3. current state of deployment; and
  4. the person(s) responsible for the system.

(56) Vulnerabilities which impact University information systems must be addressed in a timely manner to mitigate or minimise the impact on University operations. The IT Security Team shall ensure that vulnerability assessments (VA) are conducted for the University's ICT services and systems on a regular basis.

(57) Vulnerability remediation efforts, including patch implementations, shall be coordinated and processed according to the University's Patch Management Procedure and University Risk Management Framework.

(58) All internal and external University ICT systems and resources are covered in this procedure:

  1. Internal Vulnerability Assessments
    1. Servers used for internal hosting and supporting infrastructure
    2. Servers which will be accessed through reverse proxy
    3. Research specific servers and applications
    4. Research devices and systems
    5. Desktops and workstations
  2. External Vulnerability Assessments
    1. Perimeter network devices exposed to the internet
    2. All external facing servers and services
    3. Network appliances, streaming devices and essential IP assets that are internet facing
    4. Public facing research applications and devices
    5. Cloud based services

Vulnerability Management Cycle

Asset Discovery

(59) An asset discovery scan will be executed on the segments on a monthly or quarterly basis to determine the live assets connected to the network.

(60) Network team will share the IP segments of all assets within the University, including datacentres and other virtual LANs, with the IT Security Team.

(61) IT Security Team will perform an asset discovery scan on the segments.

(62) Any assets added or removed from the segment will be detected in the asset discovery scan.

(63) IT Security Team will share with Network team the addition / removal of servers / devices for reconfirmation based on the discovery scan.

(64) The final list of IPs / IP segments will then be scanned for vulnerabilities.
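
As a simplified illustration of an asset discovery scan, the sketch below attempts a TCP connection to one port for each address in a segment and records the hosts that respond. Production discovery tools are considerably more capable; the segment and port shown are placeholders.

```python
import ipaddress
import socket


def discover_hosts(segment: str, port: int = 443, timeout: float = 0.5) -> list:
    """Return addresses in the segment that accept a TCP connection on `port`."""
    live_hosts = []
    for address in ipaddress.ip_network(segment).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((str(address), port)) == 0:
                live_hosts.append(str(address))
    return live_hosts


# Example (placeholder segment):
# print(discover_hosts("192.0.2.0/28"))
```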

Scan – Remediate – Rescan

(65) IT Security Team shall perform Vulnerability Analysis Scan on all University Critical Infrastructure Servers on a monthly basis and non-critical assets on at least a quarterly basis.

(66) IT Security Team will perform a Risk assessment to map the risk, threat, likelihood and impact rating for the vulnerabilities noted.

(67) The University Risk Management Framework shall be followed to perform the risk assessment.

(68) IT Security Team shall inform the System Owners regarding the results of the scans and share the vulnerability reports with the Responsible Administrators for each system.

(69) All vulnerabilities identified in the VA Scan shall be remediated according to the Remediation Timeline and Risk Acceptance (below).

(70) The System Owners shall inform IT Security Team regarding the completion of vulnerability remediation.

(71) Vulnerabilities that cannot be actioned within the defined timeframe will need an exception approved.

Ad-Hoc Scans

(72) Ad-hoc scans include scans on any new infrastructure devices/servers/services prior to production deployment as per the following process:

  1. New service owners shall complete a Service Desk request ticket and submit it to the IT Security Team for actioning.
  2. The IT Security Team shall perform Vulnerability Analysis Scan of specific systems (including servers) as per the environment and technology used for the system.
  3. The VA report shall be submitted to the Business owner and the respective System Owner or team.
  4. The IT Security Team lead will validate closure of all vulnerabilities with the respective System Owners and then perform a re-scan.
  5. Vulnerabilities that cannot be actioned within the defined timeframe will need an exception approved, with risk acceptance and compensating controls implemented and documented.
  6. Assets / services / devices can be released to production only after the final sign off by IT Security Team.

Classification of Vulnerabilities

(73) Vulnerabilities are classified based on their impact in a given environment, to data / information or to the University's reputation.

  1. Critical (Red Hat, Microsoft and Adobe rating: Critical; typical CVSS score: 10): a vulnerability whose exploitation could allow code execution or complete system compromise without user interaction. These scenarios include self-propagating malware or unavoidable common use scenarios where code execution occurs without warnings or prompts. This could include browsing to a web page, opening an email, or no action at all.
  2. High (Red Hat, Microsoft and Adobe rating: Important; typical CVSS score: 7.0 – 9.9): a vulnerability whose exploitation could result in compromise of the confidentiality, integrity or availability of user data, or of the integrity or availability of processing resources. This includes common use scenarios where a system is compromised with warnings or prompts, regardless of their provenance, quality or usability. Sequences of user actions that do not generate prompts or warnings are also covered.
  3. Medium (Red Hat, Microsoft and Adobe rating: Moderate; typical CVSS score: 4.0 – 6.9): the impact of the vulnerability is mitigated to a significant degree by factors such as authentication requirements or applicability only to non-default configurations. The vulnerability is normally difficult to exploit.
  4. Low (Red Hat, Microsoft and Adobe rating: Low; typical CVSS score: <4.0): applies to all other issues that have a security impact. These are the types of vulnerabilities that are believed to require unlikely circumstances to be exploited, or where a successful exploit would give minimal consequences.
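
The classification above can be expressed as a simple mapping from a CVSS score to a rating. The sketch below follows the score bands in the table; the function name is illustrative.

```python
def classify_vulnerability(cvss_score: float) -> str:
    """Map a CVSS score to the University rating bands in the table above."""
    if cvss_score >= 10.0:
        return "Critical"
    if cvss_score >= 7.0:
        return "High"
    if cvss_score >= 4.0:
        return "Medium"
    return "Low"


print(classify_vulnerability(9.8))  # High (per the table, Critical is 10)
print(classify_vulnerability(5.0))  # Medium
```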

Remediation Timeline and Risk Acceptance

(74) All vulnerabilities identified in a VA Scan shall be addressed within the timeline described below. If any particular vulnerability cannot be remediated within this timeframe, the risk of data loss/attack on the device should be formally documented and accepted by the respective groups in the table below. Remediation times and risk acceptance for the identified vulnerabilities shall be as follows:

  1. Critical: external facing devices within 1 week; internal devices within 1 week; risk acceptance by the CIO or Risk Management Office.
  2. High: external facing devices within 2 weeks; internal devices within 2 weeks; risk acceptance by the CIO.
  3. Medium: external facing devices within 3 weeks; internal devices by the next maintenance window; risk acceptance by the Information owner.
  4. Low: external facing devices by the next maintenance window; internal devices by the next maintenance window; risk acceptance by the Information owner.
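
The remediation timeline can likewise be expressed as a lookup, as in the sketch below. The timelines are taken from the table above, with the next maintenance window represented by a placeholder value; actual maintenance window dates would come from the change calendar.

```python
from typing import Optional

# Remediation timelines from the table above, in days.
# None stands in for "next maintenance window".
REMEDIATION_DAYS = {
    ("Critical", "external"): 7,   ("Critical", "internal"): 7,
    ("High", "external"): 14,      ("High", "internal"): 14,
    ("Medium", "external"): 21,    ("Medium", "internal"): None,
    ("Low", "external"): None,     ("Low", "internal"): None,
}


def remediation_deadline(level: str, exposure: str) -> Optional[int]:
    """Return the allowed remediation period in days, or None when the
    fix is scheduled for the next maintenance window."""
    return REMEDIATION_DAYS[(level, exposure)]


print(remediation_deadline("Critical", "external"))  # 7
print(remediation_deadline("Medium", "internal"))    # None
```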

Third Party Scans

(75) A third party must be engaged annually to perform vulnerability assessment and penetration testing covering all internet facing University ICT services and systems and critical internal non-internet facing ICT services and systems.

Vulnerability Management Roles and Responsibilities

(76) IT Security Team

  1. Perform asset discovery and operate the Vulnerability Management Process.
  2. Approve the Vulnerability Assessment Schedule.
  3. Oversee vulnerability remediation.
  4. Drive vulnerability program maturity through metrics development.
  5. Monitor security sources for vulnerability announcements and emerging threats that correspond to the system inventory.

(77) System Owners

  1. Implement remediation actions defined as a result of detected vulnerabilities.
  2. Test and evaluate options to mitigate or minimise the impact of vulnerabilities.
  3. Apply corrective measures to address the vulnerabilities.
  4. Report to the IT Security Team on progress in responding to vulnerabilities.

(78) Depending on how urgently a technical vulnerability needs to be addressed, the action taken should be carried out according to the change management controls or by following the UON Information Security Incident Management Guidelines.

(79) Responsibilities for vulnerability response must be included in service agreements with suppliers.

Restrictions on Software Installation

(80) Rules governing the installation of software by users must be established and implemented.

(81) Users are not allowed to install software on University devices unless specifically authorised by a System Owner or a system administrator. System Owners are responsible for the installation of software, updates and patches.

Section 8 - Information Security Audit Considerations

Objective

(82) To minimise the impact of audit activities on production systems.

Information Systems Audit Controls

(83) Audit requirements and activities involving checks on production systems must be planned and approved to minimise disruption to business processes.

(84) Prior to commencing compliance checking activities, such as audits or security reviews of production systems, the CIO and the Information owner must define, document and approve the activities. Among the items upon which they must agree are:

  1. the audit requirements and scope of the checks;
  2. audit personnel must be independent of the activities being audited;
  3. the checks must be limited to read-only access to software and data, except for isolated copies of system files, which must be erased or given appropriate protection if required when the audit is complete;
  4. the resources performing the checks must be explicitly identified;
  5. existing security metrics will be used where possible;
  6. all access must be monitored and logged, and all procedures, requirements and responsibilities must be documented;
  7. audit tests that could affect system availability must be run outside business hours; and
  8. appropriate personnel must be notified in advance in order to be able to respond to any incidents resulting from the audit.