Applies to all information systems of any kind that store or process data used to accomplish University research, teaching and learning, or administration.
The Cybersecurity Risk Management Policy requires application of the currently approved Implementation Plan to all covered systems.
Risk is defined as the measure of the extent to which an entity is threatened by a potential circumstance or event, and is typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence.(1)
Cybersecurity risk may arise from external sources or from the individual actions of those working inside the network or information systems. The concept of cybersecurity risk includes operational risk to information and technology assets with consequences affecting the availability, integrity, or confidentiality of information or information systems. This includes the resulting impact from physical or technical threats and from vulnerabilities in networks, computers, programs, and data. The data focus includes information flowing from or enabled by connections to digital infrastructure, information systems, or industrial control systems, including but not limited to information security, supply chain assurance, information assurance, and hardware and software assurance.(2)
The process described in this policy is a tool used to arrive at an understanding of risk involving information systems. Risk can be modeled as the likelihood of adverse events over a period of time, multiplied by the potential impact of those events. Risk is never reduced to zero: there is always a level of risk that must be accepted as a cost of doing business, and reducing risk to an acceptable level is likewise a cost of doing business. Risk ratings are driven by the Risk Assessment Tool, which assigns values to threats, vulnerabilities, and likelihood of exploitation to determine risk.
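To make the likelihood-times-impact model concrete, the minimal sketch below multiplies assumed 1-5 likelihood and impact scores and maps the product to a qualitative rating. The scales and thresholds here are illustrative assumptions, not the values used by the Risk Assessment Tool.

```python
# Minimal sketch of the risk model described above: risk = likelihood x impact.
# The 1-5 scales and the rating thresholds are illustrative assumptions, not
# the actual values used by the University's Risk Assessment Tool.

def risk_score(likelihood: int, impact: int) -> int:
    """Multiply likelihood (1-5) by potential impact (1-5)."""
    return likelihood * impact

def risk_rating(score: int) -> str:
    """Map a numeric score to a qualitative rating."""
    if score >= 15:
        return "High"
    if score >= 8:
        return "Moderate"
    return "Low"  # risk is never zero; some level is always accepted

# Example: a likely event (4) with moderate impact (3) rates as Moderate.
print(risk_rating(risk_score(4, 3)))  # -> "Moderate"
```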
Systems are monitored to assure that the level of cybersecurity risk is maintained at or below an acceptable level. There are policy and procedural safeguards to assure that personal privacy and academic freedom are respected. The content or use of the data is only of interest to the extent that it indicates the presence of a vulnerability or threat, such as incoming data that is part of an attack on university systems, or outgoing data that indicates a system has already been compromised. University or personal data that is stolen by an attacker is no longer private. Scrupulous monitoring helps protect data from unscrupulous use.
Threat, vulnerability, and likelihood of exploitation are complex and unique to specific business processes and technology. Cybersecurity risk is measured from the quantified or classified aspects of the data; the characteristics of the information system; the definitions and characteristics of internal or external threats and of system or environmental vulnerabilities; and the likelihood that the event or situation will manifest itself within a given application, information system, or architecture. Internal threats can be accidental or intentional. Vulnerabilities are normally discovered outside of the information environment, reported by trusted sources, and characterized against industry norms. The likelihood that an event will take place depends on the broader spectrum of people, technology, and procedures in place to counter the threat and address the vulnerability.
Table 1 below shows broad definitions of cybersecurity issues and the potential risk level that may be assigned to information systems using the Risk Management Framework.
DESCRIPTION | RISK LEVEL |
---|---|
ROOT-LEVEL INTRUSION: an unauthorized person gained root-level access/privileges on a University computer/information system/network device. | High |
USER-LEVEL INTRUSION: an unauthorized person gained user-level privileges on a University computer/information system/network device. | High |
ATTEMPTED ACCESS: an unauthorized person specifically targeted a service/vulnerability on a University computer/information system/network device in an attempt to gain unauthorized or increased access/privileges, but was denied access. | Moderate |
DENIAL OF SERVICE (DOS): use of a University computer/information system/network was denied due to an overwhelming volume of unauthorized network traffic. DOS activity may be reported as High Risk if a significant segment of the University’s network is disabled or if designated Critical Infrastructure / Key Resources are taken off-line. | Moderate |
POOR SECURITY PRACTICE: a University computer/information system/network was incorrectly configured or a user did not follow established policy. This activity may be rated as Moderate or High if the practice resulted in significant loss of data or denial of service. | Low |
SCAN/PROBE: open ports on a University computer/information system/network device were scanned with no DOS or mission impact. | Low |
MALICIOUS CODE (MALWARE): hostile code successfully infected a University computer/information system/network device. Unless otherwise directed, only the computers that were infected are reported as a Moderate Risk incident; the risk is rated higher if the malware has disabled a complete information system or a significant segment of the University’s network. | Moderate |
SUSPICIOUS ACTIVITY (INVESTIGATION): any identified suspicious activity. The event will be investigated as Low risk, and either dismissed or categorized as one of the above types of activity. | Low |
EXPLAINED ANOMALY: authorized network activity. | None |
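For illustration only, the sketch below expresses Table 1 as a lookup. The category keys follow Table 1, while the function shape and the simplification of the table's escalation clauses ("Moderate or High") into a single flag that escalates to High are assumptions.

```python
# Sketch of Table 1 as a lookup: each incident category carries a baseline
# risk level, and the escalation notes from the table are modeled as one
# flag. The simplification to a single "High" escalation is an assumption.

BASELINE_RISK = {
    "ROOT-LEVEL INTRUSION": "High",
    "USER-LEVEL INTRUSION": "High",
    "ATTEMPTED ACCESS": "Moderate",
    "DENIAL OF SERVICE": "Moderate",
    "POOR SECURITY PRACTICE": "Low",
    "SCAN/PROBE": "Low",
    "MALICIOUS CODE": "Moderate",
    "SUSPICIOUS ACTIVITY": "Low",
    "EXPLAINED ANOMALY": "None",
}

def assess(category: str, significant_impact: bool = False) -> str:
    """Return the risk level for an incident category.

    significant_impact covers the escalation clauses in Table 1, e.g. a DOS
    that disables a significant network segment, or a poor security practice
    that results in significant data loss or denial of service.
    """
    level = BASELINE_RISK[category]
    if significant_impact and level in ("Low", "Moderate"):
        return "High"
    return level

print(assess("DENIAL OF SERVICE", significant_impact=True))  # -> "High"
```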
An information system can be defined as a discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information. Information systems also include specialized systems such as industrial/process control systems, telephone switching and private branch exchange (PBX) systems, and environmental control systems.(3) Each information system should include a security boundary that clearly defines the perimeter of the system and the extent of the applicable security controls to be defined and built into the system. Figure 1 below(4) shows a simple client-server based system with the security boundary shown in green.
Figure 1: The System Security Boundary
The System Security Plan should address the hardware, software, security controls, and administrative or configuration issues associated with securing the system and the data within that boundary. The plan should also describe the interactions with adjacent systems and networks and, where necessary, describe the security controls that protect access and secure the data.
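A minimal sketch, assuming illustrative field names, of how a security boundary might be recorded: the assets inside the boundary, plus the documented interconnections to adjacent systems and the control protecting each.

```python
# Sketch of a security boundary as described above: a named set of assets
# inside the boundary, and documented interconnections to adjacent systems.
# All names and fields here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class SystemBoundary:
    name: str
    assets: set[str] = field(default_factory=set)  # components inside the boundary
    # adjacent system -> security control protecting that interconnection
    interconnections: dict[str, str] = field(default_factory=dict)

    def contains(self, asset: str) -> bool:
        """The plan's security controls apply only to assets inside the boundary."""
        return asset in self.assets

# Example: the simple client-server system of Figure 1.
boundary = SystemBoundary(
    "Client-Server System",
    assets={"app-server", "db-server", "client-workstation"},
    interconnections={"campus-network": "perimeter firewall rules"},
)
print(boundary.contains("db-server"))  # -> True
```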
The University of Wisconsin-Madison Cybersecurity Risk Management Framework is designed to provide departmental directors and managers, researchers, and information technologists with a tool to determine risk to data and operations of each network or system connected to or serviced by the campus information technology architecture. The Risk Management Framework, also called the RMF, is derived from the National Institute of Standards and Technology Special Publication 800-37 Revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, and is specifically tailored to meet the requirements and culture of the University. This section describes the RMF processes and implementation details and serves as a guide to determining cybersecurity risk to information systems and network architectures. The RMF consists of six steps that guide the development of a system with information security controls built in. Once development is completed, a formal risk assessment and continued operating checks ensure maintenance of defined risk levels. Figure 2 and Table 2 below describe the steps:
Figure 2: The Risk Management Framework
STEP | ACTIVITY TITLE | DESCRIPTION |
---|---|---|
PRE | Planning | Conducting discovery with the System Owner to aid in their understanding of the RMF and associated tools and processes. Identification of estimated level of effort, schedule and resources occurs here. |
1 | Categorize the System | A data-driven and collaborative process in which the security requirements of the system are defined by the highest classification of data handled by, or stored within, the system or its processes. The System Owner must agree with the System Category to move on to the next step. |
2 | Select Security Controls | Assignment of the administrative, physical, and technical controls required to protect the data, drawn from an agreed security controls framework (e.g., NIST 800-53). Alignment with specific compliance programs (e.g., HIPAA, FERPA, EU GDPR, GLBA) is necessary to ensure accuracy. The proper controls are selected by the Risk Analyst in consultation with the System Owner. Controls that are not attainable will be accompanied by a suitable mitigation, or an explanation from the System Owner will be recorded. |
3 | Implement and Validate Controls | During design and development, the System Owner and Developers ensure the selected controls are incorporated in the system design, validated to provide the desired protections, and verified as operational. Consulting services from the Office of Cybersecurity are available as resources allow. |
4 | Risk Assessment | Independent of the development team, the Office of Cybersecurity conducts a documented assessment to test the selected controls. Residual risk is determined with mitigating factors applied. This stage leads to a formal declaration of risk for the system or network. |
5 | Authorize the System | A final risk review is conducted with a formal declaration of risk provided by the CISO to the responsible Risk Executive who makes the determination whether to (1) operate the system at the defined risk level; (2) further mitigate risk; or (3) decline to allow continued operation. |
SYSTEM IS OPERATIONAL | ||
6 | Monitor and Mitigate | The System Owner or the Cybersecurity Operations Center should continually assess the operational controls against evolving vulnerability, threat, and impact factors. Disruption to operations or loss of data can occur when controls fail, when system upgrades occur without proper testing, or when external factors dictate; when this happens, determine and implement mitigating controls or return the system to an earlier RMF step. This step is also known as Continuous Diagnostics and Mitigation (CDM). |
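The sketch below models the six steps of Table 2 as an ordered workflow. The step names follow the table, but the enum itself and the fallback to Step 2 on a monitoring failure are illustrative assumptions (Table 2 only says the system may return to "an earlier RMF step").

```python
# Illustrative sketch of the six RMF steps of Table 2 as an ordered workflow.
# The data structure and the fallback target are assumptions for illustration.

from enum import IntEnum

class RMFStep(IntEnum):
    CATEGORIZE = 1        # classify by highest classification of data handled
    SELECT_CONTROLS = 2   # draw controls from an agreed framework (e.g., NIST 800-53)
    IMPLEMENT = 3         # build and validate controls during development
    ASSESS = 4            # independent assessment by the Office of Cybersecurity
    AUTHORIZE = 5         # Risk Executive decides whether to operate
    MONITOR = 6           # continuous diagnostics and mitigation (CDM)

def next_step(current: RMFStep, issues_found: bool = False) -> RMFStep:
    """Advance through the RMF, or fall back when monitoring finds a failure.

    Per Step 6, a failed control can return the system to an earlier step;
    this sketch models that as a return to control selection.
    """
    if current is RMFStep.MONITOR:
        return RMFStep.SELECT_CONTROLS if issues_found else RMFStep.MONITOR
    return RMFStep(current + 1)

print(next_step(RMFStep.AUTHORIZE))  # -> RMFStep.MONITOR
```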
As shown in Table 3 below, the RMF aligns with the system development life cycle and requires input documentation and information for each step. Output artifacts are produced that are used in planning, development and testing, and certification of risk leading to implementation.
STEP | ACTIVITY TITLE | PROJECT PHASE | INPUT DOCUMENTS AND ACTIVITIES | OUTPUT DOCUMENTS AND ACTIVITIES |
---|---|---|---|---|
1 | Categorize the System | Planning and Design | | |
2 | Select Security Controls | Planning and Design | | |
3 | Implement and Validate Controls | Develop and Test | | |
4 | Risk Assessment | Develop and Test | | |
5 | Authorize System | Implement | | |
6 | Mitigate and Monitor (CDM) | Operate | | |
The time to complete each step within the RMF depends on the data classification, information system size, and technical complexity. Each system will be assigned a Risk Analyst from the Office of Cybersecurity who will consult with and assist the technical teams, developers, system owners, business process owners, IT managers, and Risk Executives in navigating the process. Tables 4 and 5 below show a rough estimate of the level of effort for the assigned Risk Analyst across all steps of the RMF. Level of effort and time to complete the process should be determined collaboratively at the onset of the project and are the responsibility of the System Owner.
The Office of Cybersecurity has limited resources to assist, and each engagement is scheduled as staff become available, using a “best effort” approach. Table 4 below shows an estimated level of effort based on the type of service needed and the relative size of the information system. This level of effort reflects contact time with the project only, not the calendar hours or days necessary to gather all information, delays due to scheduling challenges, hand-off time between reviews, or holiday and weekend hold time. The term “assets” encompasses host terminals, servers, switches, routers, firewalls, intrusion detection or prevention systems, and peripherals. When defining a system, include all active components that are primarily security related in order to properly set the scope of the effort.
SERVICE | SYSTEM SIZE | # ASSETS | LABOR REQUIRED | LOE HOURS |
---|---|---|---|---|
CONSULTING SUPPORT | Small | 1-5 | 1 Consultant | 40 |
| Medium | 6-15 | 1 Consultant | 60 |
| Large | 16-50 | 1 Consultant | 60-80 |
| Extra Large | 50+ | 1 Consultant, 1 Specialist | 120+ |
CONSULTING AND ASSISTANCE IN DEVELOPING SYSTEM SECURITY PLAN AND ARTIFACTS | Small | 1-5 | 1 Consultant | 60 |
| Medium | 6-15 | 1 Consultant | 80 |
| Large | 16-30 | 1 Consultant | 160 |
| Extra Large | 30+ | 1 Consultant, 1 Specialist | 200+ |
CONSULTANT SUPPORT LABOR WITH SSP ARTIFACTS AND FULL TESTING SUPPORT | Small | 1-5 | 1 Consultant, 1 Specialist | 120 |
| Medium | 6-15 | 2 Consultants | 200 |
| Large | 16-50 | 2 Consultants, 1 Specialist | 300 |
| Extra Large | 50+ | 2 Consultants, 2-3 Specialists | 500+ |
CYBERSECURITY ARCHITECTURE AND ENGINEERING | Small | 1-5 | 1 Consultant, 1 Specialist | Project dependent |
| Medium | 6-15 | 2 Consultants | Project dependent |
| Large | 16-30 | 2 Consultants, 1 Specialist | Project dependent |
| Extra Large | 30+ | 1 Consultant, 2+ Specialists | Project dependent |
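For illustration, the sketch below encodes the “Consulting Support” rows of Table 4 as a lookup from asset count to system size and estimated LOE hours. The asset bands and hours are copied from the table; the lookup function itself is an assumption.

```python
# Sketch of the "Consulting Support" rows of Table 4: asset-count bands map
# to a system size and an estimated level of effort (contact hours). Bands
# and hours come from the table; the function shape is illustrative.

CONSULTING_SUPPORT = [
    # (maximum assets in band, system size, LOE hours)
    (5, "Small", "40"),
    (15, "Medium", "60"),
    (50, "Large", "60-80"),
    (float("inf"), "Extra Large", "120+"),
]

def consulting_loe(num_assets: int) -> tuple[str, str]:
    """Return (system size, estimated LOE hours) for a consulting engagement."""
    for max_assets, size, hours in CONSULTING_SUPPORT:
        if num_assets <= max_assets:
            return size, hours

print(consulting_loe(12))  # -> ('Medium', '60')
```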
The time estimated within each step of the RMF is shown in the table below and reflects a rough estimate of the calendar days, weeks, or months needed to process through each step, given that information is available and testing windows can be scheduled. Time to obtain a Risk Executive signature is wholly dependent on the organization and on the System Owner’s communications with the Risk Executive.
STEP WITHIN RMF | SYSTEM SIZE | ESTIMATED TIME WITHIN EACH STEP |
---|---|---|
PLANNING | Small | 2 weeks |
| Medium | 2 weeks |
| Large | 2-3 weeks |
| Extra Large | 2-3 weeks |
STEP 1: CATEGORIZE THE SYSTEM | Small | 1 day |
| Medium | 1 day |
| Large | 1 day |
| Extra Large | 2 days |
STEP 2: SELECT SECURITY CONTROLS | Small | 1 day |
| Medium | 1 day |
| Large | 2 days |
| Extra Large | 1 week |
STEP 3: IMPLEMENT AND VALIDATE CONTROLS | All sizes | System Owner and Project Team dependent |
STEP 4: RISK ASSESSMENT | Small | 2 days (depending on test duration needed) |
| Medium | <5 days (depending on test duration needed) |
| Large | 1.5-2 weeks (depending on test duration needed) |
| Extra Large | >2 weeks (depending on test duration needed) |
STEP 5: AUTHORIZE THE SYSTEM (PRESENTATION AND CISO SIGNATURE) | Small | <1 day |
| Medium | 1 day |
| Large | <2 days |
| Extra Large | 2 days |
STEP 5: AUTHORIZE THE SYSTEM (RISK EXECUTIVE SIGNATURE) | Small | <1 day |
| Medium | 1 day |
| Large | <2 days |
| Extra Large | 2 days |
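As a worked example of reading Table 5, the sketch below totals the estimates for a Medium system in business days (assuming 1 week = 5 business days). Step 3 and the Risk Executive signature are excluded because the table marks them as dependent on the System Owner and the organization.

```python
# Sketch totaling Table 5 estimates for a Medium system, in business days.
# The 5-days-per-week conversion and the exclusions are stated assumptions.

MEDIUM_ESTIMATES_DAYS = {
    "Planning": 10,                  # 2 weeks
    "Step 1: Categorize": 1,         # 1 day
    "Step 2: Select Controls": 1,    # 1 day
    "Step 4: Risk Assessment": 5,    # "<5 days", taken as the upper bound
    "Step 5: CISO Signature": 1,     # 1 day
}

total = sum(MEDIUM_ESTIMATES_DAYS.values())
print(f"Upper-bound schedule (excluding Step 3): {total} business days")
# -> Upper-bound schedule (excluding Step 3): 18 business days
```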
A full description of each service and activities that take place in each step of the RMF along with information on the related cost is available upon request from the Office of Cybersecurity.
For most information systems and applications, there are security controls that can be inherited from the surrounding infrastructure or from adjacent business processes or systems within the architecture. System Owners and Risk Executives should consider a security control “inheritable” only if it is a verified security asset. Much like an inheritance received upon the death of a relative, it is not real until it has been verified to exist and to be functioning.
Information systems “inherit” controls from an architecture or program much as a child inherits heirlooms, property, or money from a parent, and a System Owner can allow another system or architecture to “inherit” a security control much as the deceased address the disposition of their possessions in a Last Will and Testament. When an information system allows a control to be “inherited” and used by another system or architecture, the “parent” System Owner is responsible for keeping the control functioning, including making available to the “child” System Owner a record of periodic verification of that control.
Finally, the inherited control must be appropriate for the system or architecture. For example, multi-factor authenticator management cannot be inherited from the current UW System Human Resources System (HRS), which uses Symantec Multi-factor Authentication (MFA), by a research data warehouse system that is to use Duo MFA. By contrast, the availability of backup power supplied to a data center can be inherited by a broad group of systems, provided they are housed within that data center.
Inherited security controls should be clearly marked within the Risk Assessment Tool and the Plan of Action and Milestones for the information system.
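As a sketch of how such a record might look (the field names and the one-year verification window are assumptions, not requirements of the Risk Assessment Tool), an inherited control could carry the providing system and its most recent verification date:

```python
# Illustrative record of an inherited control: the control counts only if the
# providing ("parent") system has verified it recently. The field names and
# the 365-day window are assumptions for this sketch.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class InheritedControl:
    control_id: str       # e.g., a NIST 800-53 control identifier
    provider: str         # the "parent" system supplying the control
    last_verified: date   # record of periodic verification by the provider

    def is_valid(self, max_age_days: int = 365) -> bool:
        """An inherited control counts only while its verification is current."""
        return date.today() - self.last_verified <= timedelta(days=max_age_days)

# Example: backup power inherited from a data center, verified 90 days ago.
power = InheritedControl("PE-11", "Campus Data Center",
                         date.today() - timedelta(days=90))
print(power.is_valid())  # -> True
```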
Questions and comments to this document can be directed to the Office of Cybersecurity at cybersecurity@cio.wisc.edu.
A01 General Education Admin
A02 General Services, AIMS
A03 Business Services
A04 Division of Student Life
A05 Enrollment Management
A06 Division of Information Technology (DoIT)
A07 College of Agriculture and Life Sciences
A10 International Division
A12 Wisconsin School of Business
A17 School of Education
A18 Arts Institute
A19 College of Engineering
A27 School of Human Ecology
A34 Vice Chancellor for Research & Graduate Education
A40 Nelson Institute for Environmental Studies
A42 Division of Intercollegiate Athletics
A45 Law School
A48 College of Letters & Science
A49 General Library System
A52 Wisconsin State Lab of Hygiene
A53 School of Medicine and Public Health
A54 School of Nursing
A56 School of Pharmacy
A57 University Health Services
A71 Facilities Planning & Management
A77 University of Wisconsin Police
A80 Recreational Sports
A85 University Housing
A87 School of Veterinary Medicine
A88 Wisconsin Veterinary Diagnostic Lab
A93 Division of Continuing Studies
A96 Wisconsin Union
The terms and definitions shown below are provided to clarify specific characteristics of cybersecurity articulated within this document. References to source documents are provided as necessary to ensure complete understanding. The table below provides the long title associated with each acronym or abbreviation used in this document.
Acronym or Abbreviation | Long Title |
---|---|
D-CISO | Deputy Chief Information Security Officer |
CDM | Continuous Diagnostics and Mitigation |
CIO | Chief Information Officer |
CISO | Chief Information Security Officer |
DoIT | Division of Information Technology |
FERPA | Family Educational Rights and Privacy Act of 1974 |
HCC | Health Care Component (of UW–Madison) |
HIPAA | Health Insurance Portability and Accountability Act of 1996 |
HITECH | Health Information Technology for Economic and Clinical Health Act |
HRS | Human Resource System |
IRB | Institutional Review Board |
ITC | Information Technology Committee |
NIST | National Institute of Standards and Technology |
NIST SP | NIST Special Publication |
PCI-DSS | Payment Card Industry Data Security Standard |
PHI | Protected Health Information |
PII | Personally Identifiable Information |
PAT | Policy Planning and Analysis Team |
POAM | Plan of Actions and Milestones |
RMF | Risk Management Framework |
SDLC | System Development Life Cycle |
SETA | Security Education, Training, and Awareness |
SFS | Shared Financial System |
UW–Madison | University of Wisconsin–Madison |
UW–MIST | UW–Madison Information Security Team |
UWSA | University of Wisconsin System Administration |
VCFA | Vice Chancellor for Finance and Administration |
VP IT | Vice Provost for Information Technology |