Vulnerability Lifecycle

I was given the opportunity to talk about the lifecycle of vulnerabilities for a cybersecurity focus group with AITP of the Ozarks. Some of the content was a bit dry, but laying a proper context and covering the legal obligations around bug hunting and disclosure is an important part of the process. Overall, I was happy to share some knowledge and experiences, and someday I hope to give a full disclosure for CVE-2017-12301; however, sharing knowledge of the vulnerability lifecycle can be just as beneficial (if not more so, in my opinion), especially for those aspiring to get into bug hunting and exploit development.

Below you will find the outline I used as the basis of my talk. There was a lot of content to cover in an hour, and with this being only my second public speaking event, I fumbled around a bit and didn't have time to get to everything. The outline is pretty ugly in its current format, but part of the fun is doing research on your own. It's my sincerest hope that everyone was able to take something away from the presentation.

I truly believe anyone - no matter their life or educational background - can learn how to find bugs and develop exploits for them; however, I also believe very few will ever truly become good at it. This is an obscure area of infosec to get into, but hopefully the talk was able to shed some light on how to go about doing it. More or less, I think the talk was successful despite being given with nothing but a browser and Notepad - although it's easy to go on side-rants in any speaking endeavor. On to the outline!

What are vulnerabilities?

  • A vulnerability is a weakness or exposure in a technology, protocol or design of an information technology system such as hardware, firmware and software. (ISC2)
  • An exploit is software, a subset of data, or a sequence of commands that takes advantage of a bug or vulnerability to cause unintended or unanticipated behavior to occur (ISC2)


How common are they?

  • 16,555 vulnerabilities disclosed in 2018
  • 1,545 were critical

Who discovers them?

  • Industry Professionals
  • Students/Independent Researchers
  • Government Entities
  • Criminals

What are some affected products/services?

  • Operating System Kernels
  • Web applications
  • Server applications
  • Protocols
  • PBX systems
  • IoT (Internet of Things)
  • Healthcare products
  • Anything with computational resources

What are CVEs?

  • A standardized way to uniquely identify and name publicly disclosed vulnerabilities
  • CVE is a public dictionary of known vulnerabilities maintained by MITRE (not a database)
  • CVEs facilitate confidence in triage with stakeholders
  • CNAs (CVE Numbering Authorities) reserve and assign CVE IDs
  • Two root CNAs: MITRE, JPCERT/CC
  • Typically major vendors are also CNAs (e.g. Microsoft, Adobe, Dell, other major developers)
  • Any organization can become a CNA
  • Overwhelming majority of CVEs are allocated by MITRE
  • JPCERT/CC primarily deals with Japanese and other East Asian companies & products
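The standardized format is easy to recognize: CVE, a four-digit year, and a sequence number of four or more digits. A quick sketch of a validator in Python (the regex is my own, not an official one):

```python
import re

# CVE IDs look like CVE-YYYY-NNNN, where the sequence number is at
# least four digits (five or more have been allowed since 2014).
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(identifier: str) -> bool:
    """Return True if the string matches the standardized CVE ID format."""
    return bool(CVE_PATTERN.match(identifier))

print(is_valid_cve_id("CVE-2017-12301"))  # True
print(is_valid_cve_id("CVE-17-1"))        # False
```

Handy for scraping advisories or sanity-checking a bug report before it goes out the door.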

The important thing to note here is that if you discover a vulnerability, you should send the bug report to the relevant CNA - NOT to customer/product support centers. CNAs will handle the reservation and allocation of a CVE and coordinate with the appropriate developing party for remediation.


Legal Considerations

  • Understand the Terms of Use for products you are testing
  • Know in advance if the vendor/developer has a bug reporting program
  • Comply with scope if doing a bug bounty program
  • Computer Fraud & Abuse Act of 1986
  • Easy to land in jail if you’re not careful

Not every vendor participates in a bug bounty program. Be very careful when asking for monetary compensation when submitting a bug report, as this can easily be misinterpreted as extortion. If a company participates in a bug bounty program they will (or at the very least should) state in plain language that they do. Please do not assume that a vendor or developer you are reporting to participates in bug bounties if they do not clearly state that beforehand. An appropriate way to go about this if you are unsure is to ask "is this vulnerability eligible for a bug bounty?" This does not make any presumptions, and gives you the benefit of the doubt in the rare and unfortunate event that you end up in court.

Vulnerability Lifecycle

  1. Discovery

    1. Code review/auditing (whitebox testing)
    2. Fuzzing (blackbox testing)
  2. Testing

    1. Anomalies discovered
    2. Error-based feedback
    3. Triage
  3. Exploitation

    1. Proof of concept developed (zero-day)
    2. Details of the vulnerability become a packaged product
  4. Reporting

    1. Positive circumstances (bug report sent to vendor, company, developer, etc)
    2. Negative circumstances (exploit put up for sale on dark web, black market, or kept hidden by government entities)
    3. Lifecycle can branch into two directions depending on who initially discovers the vulnerability
  5. Response

    1. CVE assigned - developer gives patch
    2. No CVE - but developer gives patch/hotfix
    3. No CVE - developer does not patch
    4. Remains zero-day/used/sold on black market
  6. Disclosure

    1. Grace period (exploit disclosed typically 90 days after patching)
    2. No grace period (exploit disclosed immediately)
    3. No grace period (exploit sold on black market)
    4. Challenges (NDAs, intellectual property, vendor acknowledgement, malware)
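The fuzzing branch of the discovery stage above can be sketched in a few lines. This is a toy "dumb" mutation fuzzer run against a hypothetical parser, not a real harness - real fuzzers add coverage feedback, corpus management, and crash deduplication:

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes of a seed input."""
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to a target callable; collect inputs that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception as exc:  # an anomaly worth triaging
            crashes.append((sample, exc))
    return crashes

# Hypothetical target: a toy parser that chokes on a specific byte value.
def toy_parser(data: bytes):
    if 0xFF in data:
        raise ValueError("unhandled byte")

crashes = fuzz(toy_parser, b"hello world, this is a seed input")
print(f"found {len(crashes)} anomalies")
```

Each collected crash then feeds the testing and triage stages: reproduce it, minimize it, and figure out whether the anomaly is controllable.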

Giving a grace period to disclose after a patch has been released is one of the most critical times of the vulnerability lifecycle. WannaCry was the perfect example of how this can go wrong. Disclosing exploits is very cool and interesting, but giving the industry time to patch against a vulnerability is of utmost importance for the stability of products and safety of consumers.

When are they patched?

  • Typically within 3 to 6 months of bug report
  • Zero-days have an average life-span of 6.9 years (“Zero Days, Thousands of Nights” - Lillian Ablon, Andy Bogart)


Exploit Development Tools

  • GNU Debugger (GDB)
  • x64dbg (x32dbg/x64dbg), Immunity Debugger (OllyDbg out of development for 5 years)
  • (Immunity plugin), PEDA (GDB plugin)
  • Radare2, IDA Pro (Disassemblers)
  • Browser consoles
  • Metasploit Framework (e.g. pattern_offset, pattern_create, nasm_shell)
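pattern_create and pattern_offset are simple enough to sketch yourself. This is a simplified Python reimplementation of the idea (not Metasploit's actual code): generate a pattern in which every short window is unique, so the bytes that land in a register after a crash identify their exact offset in the input:

```python
import string
from itertools import product

def pattern_create(length: int) -> bytes:
    """Build a cyclic pattern of Upper/lower/digit triplets (Aa0Aa1Aa2...).
    Within one full cycle, every 4-byte window occurs exactly once."""
    out = bytearray()
    for u, l, d in product(string.ascii_uppercase,
                           string.ascii_lowercase,
                           string.digits):
        out.extend((u + l + d).encode())
        if len(out) >= length:
            break
    return bytes(out[:length])

def pattern_offset(value: bytes, length: int = 8192) -> int:
    """Locate a crash value (e.g. the bytes found in EIP) inside the pattern."""
    return pattern_create(length).find(value)

buf = pattern_create(200)
print(pattern_offset(buf[140:144]))  # 140
```

Send `pattern_create(n)` as your input, note the 4 bytes that overwrote the instruction pointer at the crash, and `pattern_offset` gives you the distance to the return address.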

Exploit Mitigations

  • ASLR (Address Space Layout Randomization)
  • DEP/NX-bit (Data Execution Prevention, No-eXecutable bit)
  • Stack canaries/cookies (a random integer value inserted before the stack return pointer)
  • SafeSEH/SEHOP (integrity checks for structured exception handler overwrites)
  • Null Pointer Dereference Protection
  • Isolated Heaps (isolated locations of critical HTML objects)
  • Protected Free
  • EAF/EAF+ (Export Address Table Filtering)
  • Control Flow Guard / Return Flow Guard (identify indirect calls & restrict where they can execute code from)
  • Sandboxing
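To make the stack canary idea concrete, here is a toy Python simulation of the detection logic. Real canaries live on the actual stack and are checked by compiler-inserted code before a function returns; this just models the mechanism:

```python
import os

class GuardedBuffer:
    """Toy model of a stack canary: a random value placed just past a
    buffer. If a write runs off the end of the buffer, the canary is
    clobbered and the corruption is caught before the 'return'."""

    def __init__(self, size: int):
        self.size = size
        self.canary = os.urandom(8)  # random secret, like a stack cookie
        self.memory = bytearray(size) + bytearray(self.canary)

    def write(self, data: bytes):
        # Deliberately unsafe: no bounds check, like strcpy()
        self.memory[:len(data)] = data

    def check(self) -> bool:
        """True if the canary is intact (no overflow occurred)."""
        return bytes(self.memory[self.size:self.size + 8]) == self.canary

buf = GuardedBuffer(16)
buf.write(b"A" * 16)
print(buf.check())  # True: the write stayed in bounds
buf.write(b"A" * 24)
print(buf.check())  # False: the overflow clobbered the canary
```

This is also why info leaks matter to attackers: if you can read the canary first, you can overwrite it with itself and slip past the check.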

This isn’t an all-inclusive list, and many more technologies exist for the purpose of mitigating exploits. All of these mitigation technologies can be bypassed, although with varying levels of difficulty. As long as people are the ones designing systems there will always be flaws in them. I highly recommend watching this webcast by Stephen Sims from SANS to get a much better explanation of these exploit mitigation technologies:

Utilizing ROP on Windows 10 - Stephen Sims

Windows Defender Exploit Guard

  • Windows Defender Exploit Guard (EMET replacement)
  • Get-ProcessMitigation -System (system configurations - can also use Set-ProcessMitigation to change options)
  • Attack Surface Reduction (more on this later in WebApp)
  • Network protection (blocks outbound connections from processes to low-reputation hosts)
  • Controlled folder access (real-time alerting and blocking on access or modification)

Mitigations (Web/App)

  • Attack Surface Reduction (Windows Defender Exploit Guard feature: disabling of embedded objects, Office, script, and e-mail-based threats)
  • PDO::prepare() / prepared statements (queries are precompiled and parameters are bound as data, preventing injection)
  • Security Headers (HSTS, HPKP, X-Content-Type-Options, X-Frame-Options, X-XSS-Protection)
  • Web Application Firewalls (WAF)
  • Containers (Docker, Kubernetes - can provide an additional layer of segmentation between applications and their hosts)
  • Filtering (static & undesirable, but sometimes that’s the only way to sanitize an input so it’s still valuable to mention it)
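PDO::prepare() is PHP-specific, but the prepared-statement idea is the same in every language: the query structure is fixed up front and user input is bound as data, never as SQL. A minimal sketch using Python's sqlite3 module with a throwaway in-memory database:

```python
import sqlite3

# In-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name: str):
    # Parameterized query: the driver binds `name` as data, so any SQL
    # syntax inside it is inert -- the same idea as PDO::prepare().
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))             # [('alice', 'admin')]
print(find_user("alice' OR '1'='1"))  # [] -- the injection payload does nothing
```

Contrast that with string concatenation into the query, where the second call would have returned every row in the table.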

Once again, this isn’t an all-inclusive list of mitigation technologies. Some of these work better than others, but for the purposes of preemptive defense it’s ideal to have one or more of these working together. For the purposes of attacking, knowing which of these you are facing during a bug hunting expedition will make your life easier.

Exploit Development Goals

  1. Audit code or fuzz for bugs (systematically enumerate ways to escape inputs)
  2. Redirect code flow to controllable circumstances
  3. Continue trial and error until a proof of concept is made
  4. Bypass mitigations (mitigations are getting better as time goes on)
  5. Know your target platform
  6. Design exploit to be as dynamic as possible

Lessons Learned

  • Encourage people to test products
  • Have a responsible disclosure policy
  • Send bug reports to a CNA, not customer support
  • Mitre can help broker disclosures if you don’t know who to send a bug report to
  • Foster vendor/client relations to give lasting value
  • Design applications for compatibility with mitigation technologies
  • Have a plan (legal-wise, proof-of-concept, working with vendor, post-advisory patching)

If this is an area of information security that interests you, I highly recommend you give the links below a good read. Only a small minority of people will ever find a previously unknown vulnerability. I may never find another one for the rest of my career, but hopefully my experiences can shed some light on things to prepare for, who to contact, and what to expect should you find one. Below you will also find some resources to help develop your skills (I highly recommend VulnHub and HackTheBox).