Cybercrime Case Study: Verizon Lessons Learned

Special “privileged” abuse.

“The greater the power, the more dangerous the abuse.” —Edmund Burke
Detection and validation
The RISK Team was called in to investigate an insider threat-related data breach. An organization was in the middle of a buyout and was utilizing retention contracts to prevent employee attrition. Based on an anonymous tip from an employee, suspicion was raised that a middle manager, hereafter referred to as “John,” had access to, and was abusing, the CEO’s email account.
Response and investigation
Late one evening after the employees had left the building, we arrived to meet with the Director of IT. He had no knowledge—nor the apparent “need to know”—of the incident, but was there to provide us with access to the systems and data. We worked throughout the night to perform forensic acquisitions of the CEO’s system, the suspect’s system, web-based email logs, and sundry other evidence sources. At just past midnight, we finally received the access we needed and were ready to dig deeper, as our IT contact took off for home in search of some zzzs.
We needed to quickly establish if there was any truth to the claim that the middle manager was reading the CEO’s email. Was it possible that the CEO’s email archive was being shared across the network? Did the suspect have access rights to the CEO’s mailbox through Microsoft Exchange? Was the suspect accessing the CEO’s email through Microsoft Outlook Web Access (OWA)? The answer to all these questions was ultimately “no.” While there are many ways to view someone’s email, our cursory review of the system images and associated logs yielded nothing.
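When working through questions like these, one quick triage step is to ask the web-mail access logs directly: which accounts, from which addresses, opened the CEO’s mailbox? The sketch below is a minimal, hypothetical version of that check; the CSV column names, file name, and mailbox address are stand-ins rather than the actual case data, and real OWA/IIS logs would need their own parsing.

```python
# Minimal triage sketch: list every account (other than the CEO) that
# opened the CEO's mailbox via web mail, and from which client IPs.
# The log format here is a hypothetical CSV with columns
# user, mailbox, client_ip -- real OWA/IIS logs differ.
import csv
from collections import Counter

CEO_MAILBOX = "ceo@example.com"  # hypothetical address

def who_touched_mailbox(log_path, mailbox=CEO_MAILBOX):
    access = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["mailbox"].lower() == mailbox and row["user"].lower() != mailbox:
                access[(row["user"], row["client_ip"])] += 1
    return access

if __name__ == "__main__":
    for (user, ip), hits in who_touched_mailbox("owa_access.csv").most_common():
        print(f"{user:30} {ip:15} {hits} hits")
```

Counting by (user, client IP) pairs makes a one-off administrative touch easy to distinguish from a pattern of repeated reads.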
As the next day drew on, the lack of a “smoking gun,” not to mention sleep, left our brains fried. After hitting the vending machine, we refocused and changed our approach. We swung back to the basics, started brainstorming, and sharpened Occam’s razor by asking ourselves the simplest question: How does email come into an organization? It usually comes from the internet through some spam filter before hitting the mail server. Did this organization have an onsite spam filter? Yes, a quick glance at a crude network diagram showed a standard spam filter setup. The appliance itself wasn’t a standardized system that we could acquire forensically. With credentials provided by our IT contact, we logged in and noticed that the filter was set up to log all incoming emails, including the CEO’s. This was a bit odd, but not necessarily unusual. A speedy check for the access logs to this appliance revealed that they had been recently deleted. We felt like we were onto something.
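That inbound path (internet, then spam filter, then mail server) can be confirmed straight from a delivered message, because every hop prepends its own Received: header. The sketch below is a small illustration of that idea rather than part of the original investigation; the file name is hypothetical, and header formats vary by vendor.

```python
# Walk the Received: headers of an exported message (.eml) to see the
# path it actually took into the organization. Hops are prepended as the
# message travels, so reversing them gives chronological order.
from email import policy
from email.parser import BytesParser

def mail_path(raw_message_path):
    with open(raw_message_path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    hops = list(reversed(msg.get_all("Received", [])))
    for i, hop in enumerate(hops, 1):
        # Keep only the "from X by Y" fragment; drop the timestamp after ';'.
        text = str(hop)
        print(f"hop {i}: {' '.join(text.split(';')[0].split())}")

if __name__ == "__main__":
    mail_path("ceo_message.eml")  # hypothetical exported message
```

Seeing the spam filter’s hostname show up as a hop is the cheap confirmation that it sits in the mail path and is therefore worth pulling logs from.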
At this point, we needed to know who had access to the spam filter. Apparently,a few IT administrators had access, and none of them was John. In casualconversations with the IT director, we inquired about personal relationshipsbetween John and the short list of other employees. Bingo! It just so happened thatone of the IT administrators, hereafter referred to as “Kevin,” was very good friendswith John.
Armed with this nugget of knowledge, we took an image of Kevin’s system. Like John’s, Kevin’s system had zero in terms of web-browsing history. Thanks to our insight gained from the spam filter, we knew exactly which text “strings” to look for. A keyword search of the unallocated clusters (currently unused space potentially containing artifacts of previous activity) on both systems revealed strings associated with logging into the spam filter and looking at the CEO’s incoming email through good ole Kevin’s administrator account. It turns out that Kevin had given John his credentials to log into the appliance and read incoming email for potentially any employee. In addition, John’s system showed signs of having used Kevin’s credentials to browse sensitive file shares and conduct other unauthorized actions.
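In spirit, that keyword search boils down to scanning raw disk content for known byte strings and noting the offsets where they appear. The sketch below is a stripped-down, hypothetical version: it scans an entire raw (dd-style) image rather than carving out only the unallocated clusters, and the keywords and image name are illustrative stand-ins, not the actual case strings. A forensic suite such as EnCase, FTK, or Autopsy does this with far more rigor.

```python
# Stripped-down keyword search over a raw disk image. It streams the image
# in chunks, keeps a small overlap so keywords spanning a chunk boundary
# aren't missed, and reports the byte offset of every hit. The keywords and
# image file name are hypothetical placeholders.
KEYWORDS = [b"spamfilter.corp.example/admin", b"ceo@example.com"]
CHUNK = 64 * 1024 * 1024
OVERLAP = max(len(k) for k in KEYWORDS) - 1

def search_image(image_path):
    hits = set()
    with open(image_path, "rb") as img:
        offset = 0   # absolute offset of the current chunk in the image
        tail = b""   # trailing bytes carried over from the previous chunk
        while True:
            block = img.read(CHUNK)
            if not block:
                break
            data = tail + block
            for kw in KEYWORDS:
                start = 0
                while (idx := data.find(kw, start)) != -1:
                    hits.add((offset - len(tail) + idx, kw.decode()))
                    start = idx + 1
            tail = data[-OVERLAP:]
            offset += len(block)
    return sorted(hits)

if __name__ == "__main__":
    for off, kw in search_image("kevin_system.dd"):
        print(f"0x{off:010x}  {kw}")
```

Real unallocated-space analysis would first walk the filesystem to exclude allocated clusters, and would usually also search UTF-16 encodings of the same strings, since Windows artifacts often store text that way.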

“Ask the data”
A peek into the incident data that feeds into the DBIR shows that, unlike this example, the majority (63%) of data breaches over the previous three years involving “insider and privilege misuse” were financially motivated. End-users with access to Personally Identifiable Information (PII) and bank employees with access to banking information are more prevalent than system administrators using privileged access. A pessimist would argue that this is because misuse leading to identity theft or fraudulent transactions is only identified as a result of the post-compromise fraud.

Remediation and recovery
We promptly reported our findings to the CEO, who then informed the legal and human resources (HR) departments. Soon thereafter, the decision was made to interview the two employees before moving forward. During the interviews, both employees denied any association with the spam filter, the CEO’s email, and the sensitive file shares. But the evidence uncovered by our investigation left no room for doubt. After having worked a few insider cases, you begin to learn that most people, no matter how hard they try, or how comfortable they feel, aren’t very good liars.
Upon completion of the interviews, the two employees in question received personal escorts out of the building. Needless to say, after this incident, the firm revisited its spam filter policy and reconfigured the appliance to log only flagged messages.
“Bob, the force-multiplier”
One of the most memorable insider cases we have ever seen involved a US-based company asking for our help in understanding some anomalous activity that it was witnessing in its Virtual Private Network (VPN) logs. This organization had been slowly moving toward a more telecommuting-oriented workforce, and had therefore started to allow developers to work from home on certain days. In order to accomplish this, it had set up a fairly standard VPN concentrator approximately two years prior to this event.
The IT security department decided that it should start actively monitoring logs being generated at the VPN concentrator. It began scrutinizing daily VPN connections into its environment, and before long found an open and active VPN connection from Asia! When one considers that this company fell into the designation of US critical infrastructure, it’s hard to overstate the possible implications of such an occurrence. (A sketch of this kind of geolocation check appears after Bob’s schedule below.)

The company had implemented two-factor authentication for these VPN connections. The second factor was a rotating token key fob. The developer whose credentials were being used was sitting at his desk in the office. Plainly stated, the VPN logs showed him logged in from China, yet the employee was right there, sitting at his desk, staring into his monitor. The company initially suspected some kind of unknown malware that was able to route traffic from a trusted internal connection to China and then back. What other explanation could there be?

As it turns out, Bob had simply outsourced his own job to a foreign consulting firm. Bob spent less than one fifth of his six-figure salary paying a foreign firm to do his job for him. Authentication was no problem. He physically FedEx’d his token to Asia so that the third-party contractor could log in under his credentials during the workday. It appeared that Bob was working an average 9-to-5 workday. Investigators checked his web-browsing history, and that told the whole story. A typical “work day” for Bob looked like this:
9:00 AM—Arrive and surf Reddit for a couple of hours. Watch cat videos.
11:30 AM—Take lunch.
1:00 PM—eBay time.
2:00ish PM—Facebook updates and LinkedIn.
4:30 PM—End of day update email to management.
5:00 PM—Go home.
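Monitoring of the kind that surfaced this case can start out very simple: flag any VPN session whose source address does not geolocate to a country where the workforce is expected to be. The sketch below is a hypothetical illustration under that assumption; the log columns, file name, and the ip_to_country() placeholder are all stand-ins, and a real deployment would use the concentrator’s own export format plus a proper GeoIP database.

```python
# Hypothetical sketch: flag VPN sessions originating outside an expected
# set of countries. Log columns (user, src_ip, start_time) and the
# ip_to_country() lookup are placeholders, not a real concentrator format.
import csv

EXPECTED_COUNTRIES = {"US"}  # assumption: a US-only workforce

def ip_to_country(ip):
    # Placeholder lookup; swap in a real GeoIP query here.
    return "CN" if ip.startswith("203.") else "US"

def anomalous_sessions(log_path):
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            country = ip_to_country(row["src_ip"])
            if country not in EXPECTED_COUNTRIES:
                yield row["user"], row["src_ip"], country, row["start_time"]

if __name__ == "__main__":
    for user, ip, country, when in anomalous_sessions("vpn_sessions.csv"):
        print(f"ALERT: {user} connected from {ip} ({country}) at {when}")
```

A natural next step, and one that would have flagged Bob even without geolocation, is alerting on concurrent sessions for the same user from widely separated addresses.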
Evidence even suggested he had the same scam going across multiple companies in the area. All told, it looked like he earned several hundred thousand dollars a year, and only had to pay the foreign consulting firm about $50K annually. The best part? Investigators had the opportunity to read through his performance reviews while working alongside HR. For the past several years in a row, he received excellent remarks. His code was clean, well written, and submitted in a timely fashion. Quarter after quarter, his performance review noted him as the best developer in the building. Nice work, Bob!