UW-Madison - IT - Cybersecurity Risk Management Implementation Plan Appendices

Applies to all information systems of any kind that store or process data used to accomplish University research, teaching and learning, or administration.

The Cybersecurity Risk Management Policy requires application of the currently approved Implementation Plan to all covered systems.

Appendix A – University of Wisconsin-Madison Cybersecurity Risk Management Framework


Risk is defined as the measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence.(1)

Cybersecurity risk may arise from external sources or from the individual actions of those working inside the network or information systems. The concept of cybersecurity risk includes operational risk to information and technology assets that has consequences affecting the availability, integrity, or confidentiality of information or information systems. This includes the resulting impact from physical or technical threats and vulnerabilities in networks, computers, programs and data. The data focus includes information flowing from or enabled by connections to digital infrastructure, information systems, or industrial control systems, including but not limited to, information security, supply chain assurance, information assurance, and hardware and software assurance.(2)

The process described in this policy is a tool used to arrive at an understanding of risk involving information systems. Risk can be modeled as the likelihood of adverse events over a period of time, multiplied by the potential impact of those events. Risk is never reduced to zero. There is always a level of risk that must be accepted as a cost of doing business. Reducing the risk to an acceptable level is also a cost of doing business. Risk ratings are driven by the Risk Assessment Tool, which assigns values to threats, vulnerabilities, and likelihood of exploitation to determine risk.
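The likelihood-times-impact model above can be sketched as follows. This is an illustrative sketch only: the numeric scales and thresholds are hypothetical assumptions for the example, not the scoring used by the University's Risk Assessment Tool.

```python
# Illustrative sketch of risk = likelihood x impact.
# The scales and thresholds below are hypothetical examples,
# not the values used by the University's Risk Assessment Tool.

def risk_score(likelihood, impact):
    """Likelihood (0.0-1.0, chance of the event over the period)
    multiplied by impact (1-10, severity if the event occurs)."""
    return likelihood * impact

def risk_level(score):
    """Map a numeric score to a qualitative rating (example thresholds)."""
    if score >= 5.0:
        return "High"
    if score >= 2.0:
        return "Moderate"
    return "Low"

# A likely-but-low-impact event and a rare-but-severe one can carry
# the same overall risk:
phishing = risk_score(likelihood=0.9, impact=3)   # 2.7
breach   = risk_score(likelihood=0.3, impact=9)   # 2.7
print(risk_level(phishing), risk_level(breach))   # Moderate Moderate
```

The point of the sketch is that neither factor alone determines risk; a frequent nuisance and a rare catastrophe can warrant the same attention.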

Systems are monitored to assure that the level of cybersecurity risk is maintained at or below an acceptable level. There are policy and procedural safeguards to assure that personal privacy and academic freedom are respected. The content or use of the data is only of interest to the extent that it indicates the presence of a vulnerability or threat, such as incoming data that is part of an attack on university systems, or outgoing data that indicates a system has already been compromised. University or personal data that is stolen by an attacker is no longer private. Scrupulous monitoring helps protect data from unscrupulous use.


Threat, vulnerability and likelihood of exploitation are complex and unique to specific business processes and technology. Cybersecurity risk is measurable depending on quantified or classified aspects of the data; characteristics of the information system; the definitions and characteristics of internal or external threat, system or environmental vulnerabilities; and the likelihood that the event or situation may manifest itself within a given application, information system or architecture. Internal threats can be accidental or intentional. Vulnerabilities are normally discovered outside of the information environment and reported by trusted sources and characterized against industry norms. The likelihood an event may take place is dependent on the broader spectrum of people, technology and procedures in place to counter the threat and address the vulnerability.

Table 1 below shows broad definitions of cybersecurity issues and the potential risk level that may be assigned to information systems using the Risk Management Framework.

Table 1: Example Issues and Risk Levels
ROOT-LEVEL INTRUSION: an unauthorized person gained root-level access/privileges on a University computer/information system/network device. High
USER-LEVEL INTRUSION: an unauthorized person gained user-level privileges on a University computer/information system/network device. High
ATTEMPTED ACCESS: an unauthorized person specifically targeted a service/vulnerability on a University computer/information system/network device in an attempt to gain unauthorized or increased access/privileges, but was denied access. Moderate
DENIAL OF SERVICE (DOS): use of a University computer/information system/network was denied due to an overwhelming volume of unauthorized network traffic. DOS activity may be reported as High Risk if a significant segment of the University’s networks is disabled or if designated Critical Infrastructure / Key Resources are taken off-line. Moderate
POOR SECURITY PRACTICE: a University computer/information system/network was incorrectly configured or a user did not follow established policy. This activity may be rated as Moderate or High if the practice resulted in significant loss of data or denial of service. Low
SCAN/PROBE: open ports on a University computer/information system/network device were scanned with no DOS or mission impact. Low
MALICIOUS CODE (MALWARE): hostile code successfully infected a University computer/information system/network device. Unless otherwise directed, only those computers that were infected will be reported as a Moderate Risk incident; the incident may be rated High if the malware has disabled a complete information system or a significant segment of the University’s network. Moderate
SUSPICIOUS ACTIVITY (INVESTIGATION): any identified suspicious activity. The event will be investigated as Low risk, and either dismissed or categorized as one of the above types of activity. Low
EXPLAINED ANOMALY: authorized network activity. None
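The Table 1 mapping above can be expressed as a simple lookup. This is a sketch for illustration only, not a tool provided by the Office of Cybersecurity, and the escalation rules noted in the table still require analyst judgment.

```python
# Sketch of the Table 1 incident-category -> default-risk-level mapping.
# Comments note the escalation conditions from Table 1.
RISK_LEVELS = {
    "root-level intrusion": "High",
    "user-level intrusion": "High",
    "attempted access": "Moderate",
    "denial of service": "Moderate",   # High if a significant network segment is disabled
    "poor security practice": "Low",   # Moderate/High if significant loss or DOS results
    "scan/probe": "Low",
    "malicious code": "Moderate",      # High if a complete system or segment is disabled
    "suspicious activity": "Low",
    "explained anomaly": "None",
}

def initial_risk_level(category):
    """Return the default rating; unrecognized activity starts as
    Suspicious Activity (Low) pending investigation."""
    return RISK_LEVELS.get(category.lower(), "Low")

print(initial_risk_level("Scan/Probe"))  # Low
```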


An information system can be defined as a discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information. Information systems also include specialized systems such as industrial/process control systems, telephone switching and private branch exchange (PBX) systems, and environmental control systems.(3) Each information system should include a security boundary which clearly defines the perimeter of the system and the extent of applicable security controls to be defined and built into the system. Figure 1 below(4) shows a simple client-server based system with the security boundary shown in green.

Figure 1: The System Security Boundary

Security boundary of a client-server system

The System Security Plan should address the hardware, software, security controls, and administrative or configuration issues associated with securing the system and the data within that boundary. The plan should also describe the interactions with adjacent systems and networks and, where necessary, describe the security controls that protect access and secure the data.


The University of Wisconsin-Madison Cybersecurity Risk Management Framework is designed to provide departmental directors and managers, researchers, and information technologists with a tool to determine risk to data and operations of each network or system connected to or serviced by the campus information technology architecture. The Risk Management Framework, also called the RMF, is derived from the National Institute of Standards and Technology Special Publication 800-37 Revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, and specifically tailored to meet the requirements and culture at the University. This section describes the RMF processes and implementation details and serves as a guide to determining cybersecurity risk to information systems and network architectures. The RMF consists of six steps that guide the development of a system with information security controls built in. Once development is completed, a formal risk assessment and continued operating checks ensure maintenance of defined risk levels. Figure 2 and Table 2 below describe the steps:

Figure 2: The Risk Management Framework

Superimpose RMF steps on the system development life cycle
Table 2: Steps within the Risk Management Framework
PRE Planning Conducting discovery with the System Owner to aid in their understanding of the RMF and associated tools and processes. Identification of estimated level of effort, schedule and resources occurs here.
1 Categorize the System A data driven and collaborative process where the security requirements of the system are defined by the highest classification of data handled by, or stored within, the system or processes. The System Owner must agree with the System Category to move on to the next step.
2 Select Security Controls Assignment of the administrative, physical and technical controls required to protect the data are drawn from an agreed security controls framework (e.g., NIST 800-53). Alignment with specific compliance programs (e.g., HIPAA, FERPA, EU GDPR, GLBA) is necessary to ensure accuracy. The proper controls are selected by the Risk Analyst in consultation with the System Owner. Controls that are not attainable will be accompanied by a suitable mitigation, or an explanation from the System Owner will be recorded.
3 Implement and Validate Controls During design and development, the System Owner and Developers ensure the selected controls are incorporated in the system design, validated to provide the desired protections, and verified as operational. Consulting services from the Office of Cybersecurity are available as resources allow.
4 Risk Assessment Independent of the development team, the Office of Cybersecurity conducts a documented assessment to test the selected controls. Residual risk is determined with mitigating factors applied. This stage leads to a formal declaration of risk for the system or network.
5 Authorize the System A final risk review is conducted with a formal declaration of risk provided by the CISO to the responsible Risk Executive who makes the determination whether to (1) operate the system at the defined risk level; (2) further mitigate risk; or (3) decline to allow continued operation.
6 Monitor and Mitigate The System Owner or the Cybersecurity Operations Center should continually assess the operational controls against evolving vulnerability, threat and impact factors. Disruption to operations or loss of data can occur when controls fail, system upgrades occur without proper testing, or external factors dictate; in those cases, determine and implement mitigating controls or return the system to an earlier RMF step. This step is also known as Continuous Diagnostics and Mitigation (CDM).
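Step 1's principle that the system category is set by the highest classification of data it handles can be sketched as a high-water-mark function. The classification labels and their ordering below are illustrative assumptions for the example, not the official UW–Madison data classification scheme.

```python
# High-water-mark sketch: a system's security category equals the most
# sensitive classification of any data it stores or processes.
# The labels and ordering here are illustrative assumptions only.
ORDER = ["Public", "Internal", "Sensitive", "Restricted"]

def system_category(data_classifications):
    """Return the highest classification present among the system's data."""
    if not data_classifications:
        return ORDER[0]  # no data identified yet -> lowest category
    return max(data_classifications, key=ORDER.index)

# A system mixing public and sensitive data is categorized by the
# most sensitive element it handles:
print(system_category(["Public", "Sensitive", "Internal"]))  # Sensitive
```

This is why the System Owner must agree with the System Category before moving to Step 2: a single overlooked data element of a higher classification changes the category, and with it the controls selected.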

As shown in Table 3 below, the RMF aligns with the system development life cycle and requires input documentation and information for each step. Output artifacts are produced that are used in planning, development and testing, and certification of risk leading to implementation.

Table 3: RMF Alignment with System Development Life Cycle
1 Categorize the System Planning and Design
  • Data definition including Classification
  • FISMA determination from Contract
  • Data description
  • System description from SDLC
  • CIS Benchmarks
  • Cybersecurity Project Charter
  • System Security Plan (SSP) Questionnaire checklist
  • Data Security Triage Form
  • IT Security Baseline for Research and Academic Computing Template
  • Interview Checklist(s): e.g., FISMA Controls, HIPAA Test Plan, SA Checklist
2 Select Security Controls Planning and Design
  • Complete and Validated SSP Questionnaire checklist
  • Security Controls Inventory
3 Implement and Validate Controls Develop and Test
  • Configure Security Controls as determined.
  • Completed Package Artifacts
    • SSP
    • Topology, Data Flow, System Security Boundary
    • Ports & Protocols Table
  • Security Controls Workbook (Pre-Assessment)
  • Submitted Cybersecurity Risk Acceptance Request Form
4 Risk Assessment Develop and Test
  • Provide all audit scans (host-based scans and application-based testing)
  • Completed Security Controls Checklist validated by scanning and manual review
  • Develop and Execute Testing Plans (Artifacts not provided will be created by the Office of Cybersecurity)
  • Step Three Deliverables
  • Scanning tool (e.g., Qualys) generated Risk Assessment Report plus Analyst notes
  • Executed CCI and NIST checklists
  • Updated systems POAM
  • Validated Step Three Artifacts
  • Residual Risk Report
5 Authorize System Implement
  • Residual Risk Report
  • Step Four deliverables
  • Chief Information Security Officer signed Risk Letter plus Risk Executive’s Endorsement/Approval to Operate
6 Mitigate and Monitor (CDM) Operate
  • Approved scanning tool
  • Control Validation Plan
  • Step Five deliverables
  • Provide Monthly Risk Reports & POAM updates
  • Security Control Validation Report


The time to complete each step within the RMF depends on the data classification, information system size, and technical complexity. Each system will be assigned a Risk Analyst from the Office of Cybersecurity who will consult with and assist the technical teams, developers, system owners, business process owners, IT managers and Risk Executives in navigating the process. Tables 4 and 5 below show a rough estimate of the level of effort for the assigned Risk Analyst for the overall risk assessment effort, including all steps in the RMF. Level of effort and time to complete the process should be determined collaboratively at the onset of the project and are the responsibility of the System Owner.

The Office of Cybersecurity has limited resources to assist, and each engagement is scheduled based on when staff are available, using a “best effort” approach. Table 4 below shows an estimated level of effort based on the type of service needed and the relative size of the information system. This level of effort reflects contact time with the project only, not the calendar hours or days needed to gather all information, delays due to scheduling challenges, hand-off time between reviews, or holiday and weekend hold time. The term “assets” encompasses host terminals, servers, switches, routers, firewalls, intrusion detection or prevention systems, and peripherals. When defining a system, all active components that are primarily security related must be included in order to properly set the scope of the effort.

Table 4: Estimated Level of Effort (system size, asset count, staffing, contact hours)
CONSULTING SUPPORT
  Small: 1-5 assets, 1 Consultant, 40 hours
  Medium: 6-15 assets, 1 Consultant, 60 hours
  Large: 16-50 assets, 1 Consultant, 60-80 hours
  Extra Large: 50+ assets, 1 Consultant and 1 Specialist

  Medium: 6-15 assets, 1 Consultant, 80 hours
  Large: 16-30 assets, 1 Consultant, 160 hours
  Extra Large: 30+ assets, 1 Consultant and 2 Specialists

  Medium: 6-15 assets, 2 Consultants, 200 hours
  Large: 16-50 assets, 2 Consultants and 1 Specialist
  Extra Large: 50+ assets, 2 Consultants and 2-3 Specialists, project dependent

  Medium: 6-15 assets, 2 Consultants
  Large: 16-30 assets, 2 Consultants and 1 Specialist
  Extra Large: 30+ assets, 1 Consultant and 2+ Specialists

The time estimated within each step of the RMF is shown in the table below and reflects a rough estimate of the calendar days, weeks or months needed to process through each step, given that information is available and testing windows can be scheduled. Time to obtain a Risk Executive signature is wholly dependent on the organization and the System Owner’s communications with the Risk Executive.

Table 5: Estimated Elapsed Time within each Step of the RMF
PLANNING
  Small: 2 weeks
  Medium: 2 weeks
  Large: 2-3 weeks
  Extra Large: 2-3 weeks
STEP 1: CATEGORIZE THE SYSTEM
  Medium: 1 day
  Large: 1 day
  Extra Large: 2 days
STEP 2: SELECT SECURITY CONTROLS
  Medium: 1 day
  Large: 2 days
  Extra Large: 1 week
STEP 3: IMPLEMENT AND VALIDATE CONTROLS
  Small through Extra Large: System Owner and Project Team dependent
STEP 4: RISK ASSESSMENT
  Small: 2 days (depending on test duration needed)
  Medium: <5 days (depending on test duration needed)
  Large: 1.5-2 weeks (depending on test duration needed)
  Extra Large: >2 weeks (depending on test duration needed)
STEP 5: AUTHORIZE THE SYSTEM
  Medium: 1 day
  Large: <2 days
  Extra Large: 2 days
STEP 6: MONITOR AND MITIGATE
  Medium: 1 day
  Large: <2 days
  Extra Large: 2 days

A full description of each service and activities that take place in each step of the RMF along with information on the related cost is available upon request from the Office of Cybersecurity.


For most information systems and applications, there are security controls that can be inherited from the surrounding infrastructure or from adjacent business processes or systems within the architecture. System Owners and Risk Executives should consider a security control “inheritable” only if it is a verified security asset. Much like an inheritance received on the death of a relative, it is not real until it has been verified to exist and to be functioning.

Information systems inherit controls from an architecture or program like a child inherits heirlooms, property or money from a parent. System Owners can allow “inheritance” of a security control by another system or architecture, much as the deceased addressed the disposition of their earthly items in their Last Will and Testament. When an information system allows a control to be “inherited” and used by another system or architecture, the “parent” System Owner is responsible for keeping the control functioning – including making available to the “child” System Owner a record of periodic verification of that control.

Finally, the inherited control must be appropriate for the system or architecture. For example, multi-factor authenticator management from the current UW System Human Resources System (HRS), which uses Symantec Multi-factor Authentication (MFA), cannot be inherited by a research data warehouse system where Duo MFA is to be in place. By contrast, the availability of backup power supplied to a data center can be inherited by a broad group of systems housed within that data center.

Inherited security controls should be clearly marked within the Risk Assessment Tool and the Plan of Action and Milestones for the information system.


Questions and comments to this document can be directed to the Office of Cybersecurity at cybersecurity@cio.wisc.edu.


  1. From NISTIR 7298 Revision 2, Glossary of Key Information Security Terms, dated May 2013
  2. From A Taxonomy of Operational Cyber Security Risks by James Cebula and Lisa Young, Carnegie-Mellon University Software Engineering Institute, dated December 2010.
  3. From NIST IR 7298 Revision 2, Glossary of Key Information Security Terms, dated May 2013
  4. From University of Florida article Creating an Information System/Data Flow Diagram found at https://security.ufl.edu/it-workers/risk-assessment/creating-an-information-systemdata-flow-diagram/

Appendix B – Initial List of Risk Executives

A current list of Risk Executives is available on the Office of Cybersecurity's campus partner resources webpage.

A01 General Education Admin

A02 General Services, AIMS

A03 Business Services

A04 Division of Student Life

A05 Enrollment Management

A06 Division of Information Technology (DoIT)

A07 College of Agriculture and Life Sciences

A10 International Division

A12 Wisconsin School of Business

A17 School of Education

A18 Arts Institute

A19 College of Engineering

A27 School of Human Ecology

A34 Vice Chancellor for Research & Graduate Education

A40 Nelson Institute for Environmental Studies

A42 Division of Intercollegiate Athletics

A45 Law School

A48 College of Letters & Sciences

A49 General Library System

A52 Wisconsin State Lab of Hygiene

A53 School of Medicine and Public Health

A54 School of Nursing

A56 School of Pharmacy

A57 University Health Services

A71 Facilities Planning & Management

A77 University of Wisconsin Police

A80 Recreational Sports

A85 University Housing

A87 School of Veterinary Medicine

A88 Wisconsin Veterinary Diagnostic Lab

A93 Division of Continuing Studies

A96 Wisconsin Union

Appendix C – Terms, Definitions and Acronyms


The terms and definitions shown below are provided to clarify specific characteristics of cybersecurity articulated within this document. References to source documents are provided as necessary to ensure complete understanding.

Application
A software program hosted by an information system. (NIST SP 800-37r1, Appendix B)
Availability
Ensuring timely and reliable access to and use of information. (44 U.S.C., Sec. 3542)
Authorization (to operate)
The official management decision given by the Risk Executive to authorize operation of an information system and to explicitly accept the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, and other organizations, based on the implementation of an agreed-upon set of security controls. (NIST SP 800-37r1, Appendix B, Adapted)
Confidentiality
Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. (44 U.S.C., Sec. 3542)
Cybersecurity
The ability to protect or defend the use of cyberspace from cyber-attacks (CNSS 4009). Derived from the term “cybernetics,” which is the scientific study of communication and control processes in biological, mechanical, and electronic systems, and originated from the Greek kubernan, meaning to steer or control (OED).
Data Governance
As defined by the implementation of the UW–Madison data management framework, (in progress). For more information contact itpolicy@cio.wisc.edu.
Information Category
As defined in National Institute of Standards and Technology Special Publication 800-60 (NIST SP 800-60 rev 1), Guide for Mapping Types of Information and Information Systems to Security Categories; Information is categorized according to its information type. An information type is a specific category of information (e.g., privacy, medical, proprietary, financial, investigative, contractor sensitive, security management) defined by an organization or, in some instances, by a specific law, Executive Order, directive, policy, or regulation. UW–Madison information categories are represented on Page 6 of the Introduction to this document.
Information Classification
In the context of information security, is the classification of data based on its level of sensitivity and the impact to the University should that data be disclosed, altered or destroyed without authorization. The classification of data helps determine what baseline security controls are appropriate for that data.
Information System
A discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information. (See 44 U.S.C., Sec. 3502; OMB Circular A-130, Appendix III)
Information Security
The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability. (44 U.S.C., Sec. 3542)
Integrity
Guarding against improper information modification or destruction, and includes ensuring information non-repudiation and authenticity. (44 U.S.C., Sec. 3542)
Plan of Actions and Milestones (POAM)
A document that identifies tasks needing to be accomplished. It details resources required to accomplish the elements of the plan, any milestones in meeting the tasks, and scheduled completion dates for the milestones. (OMB Memorandum 02-01)
Risk
A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence. (FIPS 200, Adapted)
Risk Analyst
Individual from the Office of Cybersecurity assigned to help capture and refine information security requirements and ensure their integration into information technology component products and information systems through purposeful security design or configuration. (NIST SP 800-37r1, Appendix B, Adapted)
Risk Assessment
The process of identifying risks to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, and other organizations, resulting from the operation of an information system. (NIST SP 800-37r1, Appendix B, Adapted)
Risk Executive
The Risk Executive should be an executive or director, (e.g., Dean or their appointee, department chair, director of a research lab, etc.,) within the academic / functional unit, or in the line of authority above that unit. The Risk Executive must have the authority to accept the risk of operating the system on behalf of the institution and should be in the unit who will ultimately be responsible for paying for a breach (i.e., Dean or their appointee, department, research lab, etc.) (Cybersecurity Risk Management Implementation Plan)
Risk Executive (Function)
An individual or group within an organization that helps to ensure that: (i) security risk-related considerations for individual information systems, to include the authorization decisions, are viewed from an organization-wide perspective with regard to the overall strategic goals and objectives of the organization in carrying out its missions and business functions; and (ii) managing information system-related security risks is consistent across the organization, reflects organizational risk tolerance, and is considered along with other organizational risks affecting mission/business success.
Risk Management
The process of managing risks to organizational operations (including mission, functions, image, reputation), organizational assets, individuals, other organizations, and the Nation, resulting from the operation of an information system, and includes: (i) the conduct of a risk assessment; (ii) the implementation of a risk mitigation strategy; and (iii) employment of techniques and procedures for the continuous monitoring of the security state of the information system. (FIPS 200, Adapted)
Risk Register
A database managed by the Office of Cybersecurity that contains records for each Information System to which the Risk Management Framework is applied.
Security Category
“The characterization of information or an information system based on an assessment of the potential impact that a loss of confidentiality, integrity, or availability of such information or information system would have on organizational operations, organizational assets, or individuals.” (FIPS 199, Appendix A, p.8)
Security Controls
The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability of the system and its information. (FIPS 199)
Security Control Inheritance
A situation in which an information system or application receives protection from security controls (or portions of security controls) that are developed, implemented, assessed, authorized, and monitored by entities other than those responsible for the system or application; entities either internal or external to the organization where the system or application resides. (NIST SP 800-37r1, Appendix B)
System Owner
Official responsible for the overall procurement, development, integration, modification, or operation and maintenance of an information system. (NIST SP 800-37r1, Appendix B)
System Security Plan
Formal document that provides an overview of the security requirements for an information system and describes the security controls in place or planned for meeting those requirements. (NIST SP 800-37r1, Appendix B; See: NIST SP 800-18)
Threat
Any circumstance or event with the potential to adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, or other organizations through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service. (NIST SP 800-37r1, Appendix B, Adapted)
Threat Source
The intent and method targeted at the intentional exploitation of a vulnerability or a situation and method that may accidentally trigger a vulnerability. Synonymous with threat agent. (NIST SP 800-37r1, Appendix B)
Vulnerability
Weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source. (NIST SP 800-37r1, Appendix B)


The table below provides the long title associated with each acronym or abbreviation used in this document.

Table 1: Acronyms and Abbreviations
Acronym or Abbreviation Long Title
D-CISO Deputy Chief Information Security Officer
CDM Continuous Diagnostics and Mitigation
CIO Chief Information Officer
CISO Chief Information Security Officer
DoIT Division of Information Technology
FERPA Family Educational Rights and Privacy Act of 1974
HCC Health Care Component (of UW–Madison)
HIPAA Health Insurance Portability and Accountability Act of 1996
HITECH Health Information Technology for Economic and Clinical Health Act
HRS Human Resource System
IRB Institutional Review Board
ITC Information Technology Committee
NIST National Institute of Standards and Technology
NIST SP NIST Special Publication
PCI-DSS Payment Card Industry Data Security Standard
PHI Protected Health Information
PII Personally Identifiable Information
PAT Policy Planning and Analysis Team
POAM Plan of Actions and Milestones
RMF Risk Management Framework
SDLC System Development Life Cycle
SETA Security Education, Training, and Awareness
SFS Shared Financial System
UW–Madison University of Wisconsin–Madison
UW–MIST UW–Madison Information Security Team
UWSA University of Wisconsin System Administration
VCFA Vice Chancellor for Finance and Administration
VP IT Vice Provost for Information Technology

Keywords: risk | Doc ID: 81352
Owner: Tim B. | Group: IT Policy
Created: 2018-04-03 | Updated: 2023-03-30