Password Analysis and Cracking Kit by Peter Kacherginsky (iphelix)
==================================================================
PACK (Password Analysis and Cracking Kit) is a collection of utilities developed to aid in the analysis of password lists and to enhance password cracking through smart rule generation. It can be used to reverse word mangling rules, generate source words, and optimize password masks for the Hashcat family of tools.
NOTE: The toolkit itself is not able to crack passwords; instead, it is designed to make the operation of password crackers more efficient.
Rules Analysis
==================
`rulegen.py` implements password analysis and rule generation for the Hashcat password cracker as described in the [Automatic Password Rule Analysis and Generation](http://thesprawl.org/research/automatic-password-rule-analysis-generation/) paper. Please review that document for a detailed discussion of the theory behind rule analysis and generation.
Reversing source words and word mangling rules from already cracked passwords can be very effective when attacking hashes that have not yet been cracked. By continuously recycling and expanding the generated rules and words, you may be able to crack a greater number of passwords.
There are several prerequisites for effective use of `rulegen.py`. The tool uses the Enchant spell-checking library to interface with a number of spell-checking engines such as Aspell, MySpell, etc. You must install at least one of these engines, together with its language dictionaries, before using the tool (alternatively, you can supply a custom wordlist). You may also need to install Enchant itself if it is not already present on your system. Lastly, PyEnchant is bundled for convenience and should interface directly with Enchant's shared libraries; should there be any issues, simply remove the bundled 'enchant' directory and install PyEnchant for your distribution.
NOTE: Tested and works by default on Backtrack 5 R3
For additional details on specific Hashcat rule syntax see [Hashcat Rule Based Attack](http://hashcat.net/wiki/doku.php?id=rule_based_attack).
Analyzing a Single Password
-------------------------------
The most basic use of `rulegen.py` involves analysis of a single password to automatically detect rules. Let's detect rules and a source word used to generate a sample password `P@55w0rd123`:
$ python rulegen.py --verbose --password P@55w0rd123
_
RuleGen 0.0.1 | |
_ __ __ _ ___| | _
| '_ \ / _` |/ __| |/ /
| |_) | (_| | (__| <
| .__/ \__,_|\___|_|\_\
| |
|_| iphelix@thesprawl.org
[*] Using Enchant 'aspell' module. For best results please install
'aspell' module language dictionaries.
[*] Saving rules to analysis.rule
[*] Saving words to analysis.word
[*] Press Ctrl-C to end execution and generate statistical analysis.
[*] Analyzing password: P@55w0rd123
[+] Password => sa@ ss5 so0 $1 $2 $3 => P@55w0rd123
[*] Finished analysis in 0.00 seconds
There are several flags that we have used for this example:
* `--password` - specifies a single password to analyze.
* `--verbose` - prints out verbose information such as generated rules and performance statistics.
As noted in the program output, generated rules and source words were saved in `analysis.rule` and `analysis.word` files respectively:
$ cat analysis.rule
sa@ ss5 so0 $1 $2 $3
$ cat analysis.word
Password
Notice that these two files are neither sorted nor uniqued, so they may contain duplicate entries. This will come in handy once you start processing large password lists and performing statistical analysis on the generated words and rules.
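For the curious, applying such a rule by hand is straightforward. Below is a minimal sketch (not part of the toolkit) that supports just the Hashcat rule functions appearing in this README:

```python
# A minimal rule applier covering only the rule functions seen in the
# examples here: sXY substitute, $X append, ^X prepend, oNX overwrite,
# iNX insert, l lowercase, k swap first two characters.
def apply_rules(word, rules):
    w = list(word)
    for r in rules.split():
        if r[0] == 's':                      # sXY: replace all X with Y
            w = [r[2] if c == r[1] else c for c in w]
        elif r[0] == '$':                    # $X: append X
            w.append(r[1])
        elif r[0] == '^':                    # ^X: prepend X
            w.insert(0, r[1])
        elif r[0] == 'o':                    # oNX: overwrite position N with X
            w[int(r[1], 36)] = r[2]          # positions are base 36 (0-9, A-Z)
        elif r[0] == 'i':                    # iNX: insert X at position N
            w.insert(int(r[1], 36), r[2])
        elif r == 'l':                       # l: lowercase the whole word
            w = [c.lower() for c in w]
        elif r == 'k':                       # k: swap the first two characters
            w[0], w[1] = w[1], w[0]
    return ''.join(w)

print(apply_rules('Password', 'sa@ ss5 so0 $1 $2 $3'))  # P@55w0rd123
```
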
Processing password files is covered in a section below; however, let's first discuss some of the available fine tuning options using a single password as an example.
Specifying output basename
------------------------------
`rulegen.py` saves output files using the 'analysis' basename by default. You can change the file basename with the `--basename` flag as follows:
$ python rulegen.py --verbose --basename test --password P@55w0rd123
_
RuleGen 0.0.1 | |
_ __ __ _ ___| | _
| '_ \ / _` |/ __| |/ /
| |_) | (_| | (__| <
| .__/ \__,_|\___|_|\_\
| |
|_| iphelix@thesprawl.org
[*] Using Enchant 'aspell' module. For best results please install
'aspell' module language dictionaries.
[*] Saving rules to test.rule
[*] Saving words to test.word
...
Spell-checking provider
---------------------------
Notice that we are using the `aspell` Enchant module for source word detection. The exact spell-checking engine can be changed using the `--provider` flag as follows:
$ python rulegen.py --verbose --provider myspell --password P@55w0rd123
_
RuleGen 0.0.1 | |
_ __ __ _ ___| | _
| '_ \ / _` |/ __| |/ /
| |_) | (_| | (__| <
| .__/ \__,_|\___|_|\_\
| |
|_| iphelix@thesprawl.org
[*] Using Enchant 'myspell' module. For best results please install
'myspell' module language dictionaries.
...
NOTE: Provider engine priority can be specified using a comma-separated list (e.g. --provider aspell,myspell).
Forcing source word
-----------------------
The use of the source word detection engine can be completely disabled by specifying a source word with the `--word` flag:
$ python rulegen.py -q --verbose --word word --password P@55w0rd123
[*] Analyzing password: P@55w0rd123
[+] word => ^5 ^5 ^@ ^P so0 $1 $2 $3 => P@55w0rd123
[*] Finished analysis in 0.00 seconds
By specifying different source words you can have a lot of fun experimenting with the rule generation engine.
Defining Custom Dictionary
------------------------------
Inevitably you will reach a point where generating rules using the standard spell-checking engine's wordlist is no longer sufficient. You can specify a custom wordlist with the `--wordlist` flag. This is particularly useful for reusing source words from a previous analysis session:
$ python rulegen.py -q --verbose --wordlist rockyou-top100.word --password ap55w0rd
[*] Using Enchant 'Personal Wordlist' module. For best results please install
'Personal Wordlist' module language dictionaries.
[*] Analyzing password: ap55w0rd
[!] password => {rule length suboptimal: 5 (3)} => ap55w0rd
[!] password => {rule length suboptimal: 5 (3)} => ap55w0rd
[!] password => {rule length suboptimal: 5 (3)} => ap55w0rd
[!] password => {rule length suboptimal: 5 (3)} => ap55w0rd
[!] password => {rule length suboptimal: 4 (3)} => ap55w0rd
[!] password => {rule length suboptimal: 4 (3)} => ap55w0rd
[+] password => k ss5 o50 => ap55w0rd
[*] Finished analysis in 0.00 seconds
Notice that there were multiple valid rules that generate the password `ap55w0rd`; however, only the most optimal rule (based on the total number of rule operations) was selected. Multiple "optimal" rules may be produced as long as they all have the same rule length.
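Conceptually, rule generation resembles computing an edit script from the source word to the password. The following is a rough illustration using Python's `difflib`; it is not rulegen.py's actual algorithm (see the paper linked above), and unlike real Hashcat rules it writes positions in plain decimal rather than base 36:

```python
# A rough sketch of deriving positional rules -- iNX (insert),
# oNX (overwrite), DN (delete) -- that turn a source word into a password.
import difflib

def derive_rules(word, password):
    rules = []
    matcher = difflib.SequenceMatcher(None, word, password)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == 'equal':
            continue
        n_src, n_dst = i2 - i1, j2 - j1
        for k in range(min(n_src, n_dst)):          # overwrite the overlap
            rules.append('o%d%s' % (j1 + k, password[j1 + k]))
        if n_src > n_dst:                           # source longer: delete
            rules.extend(['D%d' % (j1 + n_dst)] * (n_src - n_dst))
        else:                                       # target longer: insert
            for k in range(n_src, n_dst):
                rules.append('i%d%s' % (j1 + k, password[j1 + k]))
    return ' '.join(rules)

def apply_rules(word, rules):
    """Companion applier for the decimal-position rules generated above."""
    w = list(word)
    for r in rules.split():
        op, rest = r[0], r[1:]
        if op == 'D':
            del w[int(rest)]
        elif op == 'o':
            w[int(rest[:-1])] = rest[-1]
        elif op == 'i':
            w.insert(int(rest[:-1]), rest[-1])
    return ''.join(w)

print(derive_rules('money', '$m0n3y$'))  # i0$ o20 o43 i6$
```
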
Generating Suboptimal Rules and Words
-----------------------------------------
While `rulegen.py` attempts to generate and record only the best source words and rules, there may be cases where you are interested in more results. Use the `--morewords` and `--morerules` flags to generate words and rules that may exceed the optimal edit distance:
$ python rulegen.py -q --verbose --password \$m0n3y\$ --morerules --morewords
[*] Using Enchant 'aspell' module. For best results please install
'aspell' module language dictionaries.
[*] Analyzing password: $m0n3y$
[+] money => ^$ so0 se3 $$ => $m0n3y$
[+] Mooney => sM$ o1m so0 se3 $$ => $m0n3y$
[+] mine => ^$ si0 se3 $y $$ => $m0n3y$
[+] mine => ^$ si0 i43 o5y $$ => $m0n3y$
[+] mine => ^$ si0 i43 i5y o6$ => $m0n3y$
[+] Monet => sM$ o1m i20 se3 o5y $$ => $m0n3y$
[+] Monet => sM$ i1m so0 se3 o5y $$ => $m0n3y$
[+] Monet => ^$ l so0 se3 o5y $$ => $m0n3y$
[+] Monet => sM$ o1m i20 se3 i5y o6$ => $m0n3y$
[+] Monet => sM$ i1m so0 se3 i5y o6$ => $m0n3y$
[+] Monet => ^$ l so0 se3 i5y o6$ => $m0n3y$
[+] Monet => sM$ o1m i20 i43 o5y o6$ => $m0n3y$
[+] Monet => sM$ i1m so0 i43 o5y o6$ => $m0n3y$
[+] Monet => ^$ l so0 i43 o5y o6$ => $m0n3y$
[+] moneys => ^$ so0 se3 o6$ => $m0n3y$
[*] Finished analysis in 0.00 seconds
It is possible to further expand generated words using `--maxworddist` and `--maxwords` flags. Similarly, you can produce more rules using `--maxrulelen` and `--maxrules` flags.
Disabling Advanced Engines
------------------------------
`rulegen.py` includes a number of advanced engines to generate better quality words and rules. It is possible to disable them to observe the difference (or if they are causing issues) using `--simplewords` and `--simplerules` flags. Let's observe how both source words and rules change with these flags on:
$ python rulegen.py -q --hashcat --verbose --password \$m0n3y\$ --simplewords --simplerules
[*] Using Enchant 'aspell' module. For best results please install
'aspell' module language dictionaries.
[*] Analyzing password: $m0n3y$
[-] MN => {best distance exceeded: 7 (4)} => $m0n3y$
[+] many => i0$ o20 i43 i6$ => $m0n3y$
[+] mingy => i0$ o20 o43 i6$ => $m0n3y$
[+] money => i0$ o20 o43 i6$ => $m0n3y$
[*] Finished analysis in 0.01 seconds
Notice that the quality of generated words was reduced significantly, with words like 'MN', 'many' and 'mingy' having little relationship to the actual source word 'money'. At the same time, the generated rules were reduced to simple insertions, deletions and replacements.
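Edit distance is what drives thresholds such as `--maxworddist` and the "best distance exceeded" messages above. For reference, a textbook Levenshtein implementation (this is an illustration, not the toolkit's own code):

```python
# Classic dynamic-programming Levenshtein distance: the minimum number of
# single-character insertions, deletions and substitutions between a and b.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

print(levenshtein('money', '$m0n3y$'))  # 4
```

Note that 4 is exactly the rule length of the best candidate in the output above, and 'MN' was rejected because its distance of 7 exceeded that bound.
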
Processing password lists
-----------------------------
Now that you have mastered all of the different flags and switches, we can attempt to generate words and rules for a collection of passwords. Let's create a text file `korelogic.txt` containing the following fairly complex test passwords:
&~defcon
'#(4)\
August19681
'&a123456
10-D'Ann
~|Bailey
Krist0f3r
f@cebOOK
Nuclear$(
zxcvbn2010!
13Hark's
NjB3qqm
Sydney93?
antalya%]
Annl05de
;-Fluffy
Now let's observe `rulegen.py` analysis by simply specifying the password file as the first argument:
$ python rulegen.py korelogic.txt
_
RuleGen 0.0.1 | |
_ __ __ _ ___| | _
| '_ \ / _` |/ __| |/ /
| |_) | (_| | (__| <
| .__/ \__,_|\___|_|\_\
| |
|_| iphelix@thesprawl.org
[*] Using Enchant 'aspell' module. For best results please install
'aspell' module language dictionaries.
[*] Saving rules to analysis.rule
[*] Saving words to analysis.word
[*] Press Ctrl-C to end execution and generate statistical analysis.
[*] Analyzing passwords file: testpasswd/korelogic.txt:
[!] '#(4)\ => {skipping alpha less than 25%} => '#(4)\
[!] '&a123456 => {skipping alpha less than 25%} => '&a123456
[*] Finished processing 16 passwords in 0.03 seconds at the rate of 533.33 p/sec
[*] Analyzed 14 passwords (87.50%)
[-] Skipped 0 all numeric passwords (0.00%)
[-] Skipped 2 passwords with less than 25% alpha characters (14.29%)
[-] Skipped 0 passwords with non ascii characters (0.00%)
[*] Top 10 word statistics
[+] Anatolia - 1 (3.33%)
[+] xxxvii - 1 (3.33%)
[+] Nuclear - 1 (3.33%)
[+] defcon - 1 (3.33%)
[+] Sydney - 1 (3.33%)
[+] Kristi - 1 (3.33%)
[+] August - 1 (3.33%)
[+] Annalist - 1 (3.33%)
[+] xxxv - 1 (3.33%)
[+] Facebook - 1 (3.33%)
[*] Saving Top 100 words in analysis-top100.word
[*] Top 10 rule statistics
[+] ^3 ^1 o4r - 2 (3.51%)
[+] i61 i79 i86 i98 oA1 - 2 (3.51%)
[+] se0 i6f i73 o8r - 1 (1.75%)
[+] i3a i5y o6a i7% o8] - 1 (1.75%)
[+] $1 $9 $6 $8 $1 - 1 (1.75%)
[+] i50 o6f sn3 o8r - 1 (1.75%)
[+] i61 i79 i86 sa8 $1 - 1 (1.75%)
[+] i50 i6f i73 o8r - 1 (1.75%)
[+] ^- ^; - 1 (1.75%)
[+] i61 i79 sa6 $8 $1 - 1 (1.75%)
[*] Saving Top 100 rules in analysis-top100.rule
[*] Top 10 password statistics
[+] Annl05de - 1 (7.14%)
[+] Krist0f3r - 1 (7.14%)
[+] 10-D'Ann - 1 (7.14%)
[+] f@cebOOK - 1 (7.14%)
[+] Nuclear$( - 1 (7.14%)
[+] 13Hark's - 1 (7.14%)
[+] ;-Fluffy - 1 (7.14%)
[+] &~defcon - 1 (7.14%)
[+] Sydney93? - 1 (7.14%)
[+] antalya%] - 1 (7.14%)
[*] Saving Top 100 passwords in analysis-top100.password
Using all default settings, we were able to produce several high-quality rules. In addition to the usual unsorted, non-uniqued 'analysis.rule' and 'analysis.word' files, processing multiple passwords produces Top 10 statistics for words, rules and passwords (displayed in the output), as well as Top 100 statistics saved in the 'analysis-top100.rule', 'analysis-top100.word' and 'analysis-top100.password' files.
Notice that several passwords, such as '#(4)\ and '&a123456, were skipped because they do not have sufficient characteristics to be processed. In addition to the alpha-character threshold, the program skips all-numeric passwords and passwords containing non-ASCII characters. The latter is due to a bug in the Enchant engine which I hope to fix in the future, thus allowing processing of passwords in many languages.
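The skip criteria described above can be sketched in a few lines (a simplified illustration, not the toolkit's exact logic):

```python
# Mirror of the three skip conditions reported in the statistics above:
# all-numeric passwords, passwords with less than 25% alpha characters,
# and passwords containing non-ASCII characters.
def should_skip(password):
    if password.isdigit():
        return 'all numeric'
    alpha = sum(c.isalpha() for c in password)
    if alpha / len(password) < 0.25:
        return 'alpha less than 25%'
    if any(ord(c) > 127 for c in password):
        return 'non ascii'
    return None  # password will be analyzed

print(should_skip("'#(4)\\"))   # alpha less than 25%
print(should_skip('P@55w0rd'))  # None
```
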
Debugging rules
--------------------
There may be situations where you run into issues generating rules for the Hashcat password cracker. `rulegen.py` includes the `--hashcat` flag to validate generated words and rules using hashcat itself running in `--stdout` mode. For this mode to work correctly, you must download the latest version of hashcat-cli and edit the `HASHCAT_PATH` variable in the source. For example, at the time of this writing I have placed the hashcat-0.42 folder in the PACK directory and defined `HASHCAT_PATH` as 'hashcat-0.42/'.
You can also observe the inner workings of the rule generation engine with the `--debug` flag. Don't worry about messages reporting that certain rules failed; this is the result of the halting problem solver trying to find an optimal and valid solution.
Have fun generating rules!
Masks Analysis
==================
The following tools implement password analysis and mask generation for the Hashcat password cracker. For additional details see [Hashcat Mask Attack](http://hashcat.net/wiki/doku.php?id=mask_attack) Wiki page.
In all of the examples, standard Hashcat notation is used to represent the different character sets:
?l - lowercase characters
?u - uppercase characters
?d - digits
?s - special characters
StatsGen
------------
Before we can begin using the toolkit, we must establish selection criteria for the sample input. Since we are looking to analyze the way people create their passwords, we must obtain as large a list of leaked passwords as possible. One excellent list is based on the RockYou.com compromise. This list is both large and diverse enough to serve as a good approximation of common passwords used on similar sites (e.g. social networking). The analysis obtained from this list may not apply to organizations with specific password policies (e.g. 8-character minimum, required digits and special characters, etc.), so select sample input as close to your target as possible. In addition, try to avoid lists based on already-cracked passwords, as they will generate statistics skewed toward the types of passwords that could be cracked, rather than the overall sample.
In the example below, we will use rockyou.txt containing approximately 14 million passwords. Launch `statsgen.py` with the following command line:
$ python statsgen.py rockyou.txt
Below is the output from the above command:
[*] Analyzing dictionary: rockyou.txt
[+] Analyzing 100% (14344391/14344391) passwords
NOTE: Statistics below is relative to the number of analyzed passwords, not total number of passwords
[*] Line Count Statistics...
[+] 8: 20% (2966004)
[+] 7: 17% (2506264)
[+] 9: 15% (2191000)
[+] 10: 14% (2013690)
[+] 6: 13% (1947858)
[+] 11: 06% (865973)
[+] 12: 03% (555333)
[+] 13: 02% (364169)
[+] 5: 01% (259174)
[+] 14: 01% (248514)
[+] 15: 01% (161181)
[*] Mask statistics...
[+] stringdigit: 37% (5339715)
[+] allstring: 28% (4115881)
[+] alldigit: 16% (2346842)
[+] othermask: 05% (731240)
[+] digitstring: 04% (663975)
[+] stringdigitstring: 03% (450753)
[+] stringspecialstring: 01% (204494)
[+] stringspecialdigit: 01% (167826)
[+] stringspecial: 01% (147874)
[+] digitstringdigit: 00% (130518)
[+] specialstringspecial: 00% (25100)
[+] specialstring: 00% (14410)
[+] allspecial: 00% (5763)
[*] Charset statistics...
[+] loweralphanum: 42% (6075055)
[+] loweralpha: 25% (3726656)
[+] numeric: 16% (2346842)
[+] loweralphaspecialnum: 03% (472673)
[+] upperalphanum: 02% (407436)
[+] mixedalphanum: 02% (382246)
[+] loweralphaspecial: 02% (381095)
[+] upperalpha: 01% (229893)
[+] mixedalpha: 01% (159332)
[+] mixedalphaspecialnum: 00% (53240)
[+] mixedalphaspecial: 00% (49633)
[+] upperalphaspecialnum: 00% (27732)
[+] upperalphaspecial: 00% (26795)
[+] special: 00% (5763)
[*] Advanced Mask statistics...
[+] ?l?l?l?l?l?l?l?l: 04% (688053)
[+] ?l?l?l?l?l?l: 04% (601257)
[+] ?l?l?l?l?l?l?l: 04% (585093)
[+] ?l?l?l?l?l?l?l?l?l: 03% (516862)
[+] ?d?d?d?d?d?d?d: 03% (487437)
[+] ?d?d?d?d?d?d?d?d?d?d: 03% (478224)
[+] ?d?d?d?d?d?d?d?d: 02% (428306)
[+] ?l?l?l?l?l?l?d?d: 02% (420326)
[+] ?l?l?l?l?l?l?l?l?l?l: 02% (416961)
[+] ?d?d?d?d?d?d: 02% (390546)
[+] ?d?d?d?d?d?d?d?d?d: 02% (307540)
[+] ?l?l?l?l?l?d?d: 02% (292318)
[+] ?l?l?l?l?l?l?l?d?d: 01% (273640)
[+] ?l?l?l?l?l?l?l?l?l?l?l: 01% (267742)
[+] ?l?l?l?l?d?d?d?d: 01% (235364)
[+] ?l?l?l?l?d?d: 01% (215079)
[+] ?l?l?l?l?l?l?l?l?d?d: 01% (213117)
[+] ?l?l?l?l?l?l?d: 01% (193110)
[+] ?l?l?l?l?l?l?l?d: 01% (189855)
[+] ?l?l?l?l?l?l?l?l?l?l?l?l: 01% (189360)
[+] ?l?l?l?d?d?d?d: 01% (178308)
[+] ?l?l?l?l?l?d?d?d?d: 01% (173560)
[+] ?l?l?l?l?l?l?d?d?d?d: 01% (160596)
[+] ?l?l?l?l?l?l?l?l?d: 01% (160061)
[+] ?l?l?l?l?l?d?d?d: 01% (152406)
Here is what we can immediately learn from the above list:
* The majority of passwords are 6-10 characters
* The majority of passwords follow masks of the form "string followed by digits", "all string", and "all digits".
* The majority of passwords use lower alphanumeric and lower alpha character sets.
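The simple mask categories ('stringdigit', 'allstring', and so on) can be approximated with a handful of regular expressions, much like the lists at the top of the statsgen.py source. A simplified sketch covering a few of the categories (the tool's own patterns may differ):

```python
# Classify a password into one of the simple mask categories from the
# "Mask statistics" section above; order matters, first match wins.
import re

MASKS = [
    ('alldigit',          re.compile(r'^\d+$')),
    ('allstring',         re.compile(r'^[A-Za-z]+$')),
    ('stringdigit',       re.compile(r'^[A-Za-z]+\d+$')),
    ('digitstring',       re.compile(r'^\d+[A-Za-z]+$')),
    ('digitstringdigit',  re.compile(r'^\d+[A-Za-z]+\d+$')),
    ('stringdigitstring', re.compile(r'^[A-Za-z]+\d+[A-Za-z]+$')),
]

def simple_mask(password):
    for name, pattern in MASKS:
        if pattern.match(password):
            return name
    return 'othermask'  # anything not covered by the patterns above

print(simple_mask('password123'))  # stringdigit
```
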
The last section, "Advanced Mask Statistics", contains the actual masks matching the most frequent passwords. Masks are generated by attempting to find the minimum matching set of regular expressions that would match that string.
Individual symbols can be interpreted as follows:
?l - a single lowercase character
?u - a single uppercase character
?d - a single digit
?s - a single special character
For example, the very first mask, "?l?l?l?l?l?l?l?l", will match all passwords consisting of exactly eight lowercase characters. Given the sample, this single mask covers approximately 4% of passwords. After generating the initial output, you may be interested in using filters to narrow down the password data.
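Computing a password's advanced mask is a simple per-character classification; a minimal sketch:

```python
# Map each character of a password to its ?l/?u/?d/?s class, producing
# the advanced mask notation used in the statistics above.
import string

def advanced_mask(password):
    mask = []
    for c in password:
        if c in string.ascii_lowercase:
            mask.append('?l')
        elif c in string.ascii_uppercase:
            mask.append('?u')
        elif c in string.digits:
            mask.append('?d')
        else:
            mask.append('?s')
    return ''.join(mask)

print(advanced_mask('P@55w0rd123'))  # ?u?s?d?d?l?d?l?l?d?d?d
```
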
Let's see how RockYou users tend to select their passwords using the "stringdigit" mask (a string followed by numbers):
$ python statsgen.py -m stringdigit rockyou.txt
[*] Analyzing dictionary: rockyou.txt
[+] Analyzing 37% (5339715/14344391) passwords
NOTE: Statistics below is relative to the number of analyzed passwords, not total number of passwords
[*] Line Count Statistics...
[+] 8: 23% (1267292)
[+] 7: 18% (981472)
[+] 9: 17% (940000)
[+] 10: 14% (750966)
[+] 6: 11% (619001)
[+] 11: 05% (294874)
[+] 12: 03% (175879)
[+] 13: 01% (103048)
[+] 14: 01% (65959)
[*] Mask statistics...
[+] stringdigit: 100% (5339715)
[*] Charset statistics...
[+] loweralphanum: 88% (4720336)
[+] upperalphanum: 06% (325943)
[+] mixedalphanum: 05% (293436)
[*] Advanced Mask statistics...
[+] ?l?l?l?l?l?l?d?d: 07% (420326)
[+] ?l?l?l?l?l?d?d: 05% (292318)
[+] ?l?l?l?l?l?l?l?d?d: 05% (273640)
[+] ?l?l?l?l?d?d?d?d: 04% (235364)
[+] ?l?l?l?l?d?d: 04% (215079)
[+] ?l?l?l?l?l?l?l?l?d?d: 03% (213117)
[+] ?l?l?l?l?l?l?d: 03% (193110)
[+] ?l?l?l?l?l?l?l?d: 03% (189855)
[+] ?l?l?l?d?d?d?d: 03% (178308)
[+] ?l?l?l?l?l?d?d?d?d: 03% (173560)
[+] ?l?l?l?l?l?l?d?d?d?d: 03% (160596)
[+] ?l?l?l?l?l?l?l?l?d: 02% (160061)
[+] ?l?l?l?l?l?d?d?d: 02% (152406)
[+] ?l?l?l?l?l?l?d?d?d: 02% (132220)
[+] ?l?l?l?l?l?l?l?l?l?d: 02% (129833)
[+] ?l?l?l?l?l?d: 02% (114739)
[+] ?l?l?l?l?d?d?d: 02% (111221)
[+] ?l?l?d?d?d?d: 01% (98305)
[+] ?l?l?l?d?d?d: 01% (98189)
[+] ?l?l?l?l?l?l?l?d?d?d: 01% (87613)
[+] ?l?l?l?l?l?l?l?l?l?d?d: 01% (82655)
[+] ?l?l?l?l?l?l?l?d?d?d?d: 01% (70915)
[+] ?l?d?d?d?d?d?d: 01% (54888)
The very top of the output specifies what percentage of total passwords was analyzed. In this case, by cracking only passwords matching the "stringdigit" mask, it is possible to recover only about 37% of the total set.
Next, it appears that only 11% of passwords of this type use anything other than lowercase characters, so it would be smart to concentrate on lowercase-only strings matching this mask. Finally, the "Advanced Mask statistics" section shows that the majority of "stringdigit" passwords consist of a string followed by two or four digits.
With the information gained from the above output, we can begin creating a mental image of target users' password generation patterns.
There are a few other filters available for password length, mask, and character sets:
**Length:** -l [integer]
**Mask:** -m [alldigit, allstring, stringdigit, digitstring, digitstringdigit, stringdigitstring, allspecial, stringspecial, specialstring, stringspecialstring, stringspecialdigit, specialstringspecial]
**Character sets:** -c [numeric, loweralpha, upperalpha, mixedalpha, loweralphanum, upperalphanum, mixedalphanum, special, loweralphaspecial, upperalphaspecial, mixedalphaspecial, loweralphaspecialnum, upperalphaspecialnum, mixedalphaspecialnum]
*DEVELOPERS: You can edit respective lists on the very top of the source files and add regular expressions for whatever mask or character set you can imagine.*
While the "Advanced Mask statistics" section only displays patterns matching more than 1% of all passwords, you can obtain and save the full list of password masks matching a given dictionary with the following command:
$ python statsgen.py -o rockyou.csv rockyou.txt
All of the password masks and their frequencies are saved to the specified file in CSV format. Naturally, you can combine this with the filters above so that the masks file matches custom criteria. The output file can be used as input to the MaskGen tool covered in the next section.
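If you want to post-process that CSV yourself, something like the following works. The "mask,count" row layout is an assumption for illustration here; verify it against the file your version of statsgen.py actually writes:

```python
# Re-sort a masks CSV (assumed layout: one "mask,count" row per mask)
# by frequency and return the n most common masks.
import csv

def top_masks(path, n=10):
    with open(path, newline='') as f:
        rows = [(mask, int(count)) for mask, count in csv.reader(f)]
    return sorted(rows, key=lambda r: r[1], reverse=True)[:n]
```
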
MaskGen
-----------
While analyzing passwords using StatsGen can be both revealing and exciting, manually selecting masks from its output is simply not feasible for larger data sets. MaskGen will analyze the masks file produced by StatsGen and help you generate an optimal password mask collection for input to the Hashcat password cracker.
Let's run MaskGen with only StatsGen's output file as an argument:
$ python maskgen.py rockyou.csv
[*] [0] [11/14344391] [0.00] [0d|0h|0m|0s] ?
[*] [1] [49/14344391] [0.00] [0d|0h|0m|0s] ?s?u?l?d
[*] [2] [340/14344391] [0.00] [0d|0h|0m|0s] ?s?u?l?d ?s?u?l?d
[*] [3] [2479/14344391] [0.00] [0d|0h|0m|0s] ?s?u?l?d ?s?u?l?d ?s?u?l?d
[*] [4] [18015/14344391] [0.00] [0d|0h|0m|0s] ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d
[*] [5] [259174/14344391] [1.00] [0d|0h|0m|7s] ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d
[*] [6] [1947858/14344391] [13.00] [0d|0h|12m|735s] ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d
[*] [7] [2506264/14344391] [17.00] [0d|19h|1163m|69833s] ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d
[*] [8] [2966004/14344391] [20.00] [76d|1842h|110570m|6634204s] ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d
[*] [9] [2191000/14344391] [15.00] [7294d|175069h|10504156m|630249409s] ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d
...
[*] Coverage is %100 (14344391/14344391)
[*] Total time 1754409989919144353064355175042468812368733249495616893327070104
8211232752070650397369740269713581215815201614225211442387866496572012147724920
4649812938136306665865356654151858255534645195728557246448055491199506753532407
0837796021192337530275757511267739149674126051965467434111830202528596251368154
7242960405598417380173912831468583249522597950717975278703858956758951666222598
6032208513767643382460723235228659294580495141287225869341138204996227078055906
67374828225147228141265693827036d|421058397580594644735445242010192514968495979
8789480543984968251570695860496956095368737664731259491795648387414050746173087
9591772829154539809115955105152713599807685596996445981328314846974853739147533
3178878816208477777001071045086161007266181802704257395921790252471712184186839
2486068631003283571338310497343620171241739079552459979885423508172314066888926
1496221483998934236647730043304234411790573576454878230699318833908934208641873
1691990944987334176016995877403533475390376651848886h|2526350385483567868412671
4520611550898109758792736883263909809509424175162981736572212425988387556950773
8903244843044770385277550636974927238854695730630916281598846113581978675887969
8890818491224348851999073272897250866662006426270516966043597090816225544375530
7415148302731051210354916411786019701428029862984061721027450434477314759879312
5410490338844013335568977328903993605419886380259825406470743441458729269384195
91300345360525185123901519456699240050561019752644212008523422599110933198m|151
5810231290140721047602871236693053886585527564212995834588570565450509778904194
3327455593032534170464334194690582686223116653038218495634331281743837854976895
Ignore the crazy long last line for a minute and let's take a look at the line prefixed with [5]. Here is how it can be interpreted:
[*] [5] [259174/14344391] [1.00] [0d|0h|0m|7s] ?s?u?l?d ?s?u?l?d ...
     |         |             |         |        |
     |         |             |         |        +-- matching mask
     |         |             |         +-- time to crack
     |         |             +-- percent coverage from sample
     |         +-- total number of matching passwords
     +-- password length
NOTE: The day, hour, minute and second figures are independent of each other: [0d|0h|1m|60s] means the total runtime is 60 seconds, not 1 minute 60 seconds. This comes in handy when doing calculations and converting back and forth.
The output presents an ordered list of masks together with how long it would take to crack all passwords matching each mask (the default speed is 1,000,000,000 keys/sec), the percentage coverage of the total sample, and the total count of matching passwords.
In the above example, the "?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d" mask shows that every position may contain any character (special, uppercase, lowercase or digit). In other words, to crack every password of length 5 you would have to try the entire character set in each position (a complete brute force), which would take about 7 seconds.
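The time figures can be reproduced with simple arithmetic, assuming Hashcat's standard charset sizes (?l=26, ?u=26, ?d=10, ?s=33) and the default speed of 1,000,000,000 keys/sec:

```python
# Keyspace and cracking time for a maskgen-style combined mask, where
# positions are space-separated groups of allowed charsets.
SIZES = {'?l': 26, '?u': 26, '?d': 10, '?s': 33}

def mask_time(mask, speed=10**9):
    keyspace = 1
    for position in mask.split():
        # each position contributes the sum of its charset sizes
        keyspace *= sum(SIZES[position[i:i + 2]]
                        for i in range(0, len(position), 2))
    return keyspace, keyspace // speed

# The full five-position brute force from the [5] line above:
print(mask_time('?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d ?s?u?l?d'))
# (7737809375, 7) -- i.e. the [0d|0h|0m|7s] shown in the output
```
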
NOTE: There is a bit of black magic going on in the background to generate masks for a specific password length. As you may have observed, passwords become dramatically harder to crack with every extra character (exponential keyspace growth), as opposed to merely widening the character set in any given position. As such, I have chosen to generate masks based on the length of the passwords they match. Combined masks are produced by looking at all masks of a given length and merging them position by position. For example, the masks ?l?l?l and ?l?l?d, each matching three-character passwords, will be combined into "?l ?l ?l?d" to match all passwords represented by both masks. This combined mask is less efficient than taking each component mask separately; however, I will show you how to extract these components shortly.
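The per-position combination described in the note can be sketched as follows (an illustration of the idea, not maskgen's actual code):

```python
# Combine same-length masks: each output position is the union of the
# charsets seen at that position across all input masks.
def combine_masks(masks):
    # split "?l?l?d" into per-position charset tokens ['?l', '?l', '?d']
    split = [[m[i:i + 2] for i in range(0, len(m), 2)] for m in masks]
    combined = []
    for position in zip(*split):
        # emit charsets in a fixed ?s ?u ?l ?d order for readability
        combined.append(''.join(c for c in ('?s', '?u', '?l', '?d')
                                if c in position))
    return ' '.join(combined)

print(combine_masks(['?l?l?l', '?l?l?d']))  # ?l ?l ?l?d
```
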
The last (and really long) line specifies the total number of days/hours/minutes/seconds required to crack every single password using every mask in the database. The time value is such a huge number because we are effectively performing a brute-force attack across all password lengths. What we will try to do next is optimize the masks to crack the maximum number of passwords in the minimum amount of time.
You should almost never run MaskGen with no parameters (except to remind yourself why straight brute-forcing is bad). Let's use some of the data gained from StatsGen to generate a set of masks that will give us 50% of passwords within a reasonable time.
We have already collected statistical data about how many times each mask occurs, so let's filter out all of the infrequently occurring masks:
$ python maskgen.py --occurrence=10000 rockyou.csv
[*] [5] [220730/14344391] [1.00] [0d|0h|0m|0s] ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d
[*] [6] [1741132/14344391] [12.00] [0d|0h|1m|87s] ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?s?u?l?d
[*] [7] [2228900/14344391] [15.00] [0d|1h|89m|5396s] ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?s?u?l?d
[*] [8] [2591942/14344391] [18.00] [5d|142h|8543m|512622s] ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?s?u?l?d ?u?l?d ?s?u?l?d
[*] [9] [1857159/14344391] [12.00] [563d|13527h|811651m|48699101s] ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?s?u?l?d ?u?l?d ?s?u?l?d ?u?l?d ?s?u?l?d
[*] [10] [1623494/14344391] [11.00] [14884d|357228h|21433720m|1286023221s] ?u?d?l ?u?d?l ?u?d?l ?u?d?l ?u?d?l ?u?d?l ?u?d?l ?u?d?l ?u?d?l ?s?u?d?l
[*] [11] [634442/14344391] [4.00] [602275d|14454600h|867276011m|52036560683s] ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d
[*] [12] [362705/14344391] [2.00] [54842d|1316217h|78973022m|4738381338s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[*] [13] [205833/14344391] [1.00] [1974325d|47383813h|2843028802m|170581728179s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[*] [14] [133214/14344391] [0.00] [71075720d|1705817281h|102349036907m|6140942214464s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[*] [15] [55398/14344391] [0.00] [19412723d|465905372h|27954322371m|1677259342285s] ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l
[*] [16] [33484/14344391] [0.00] [504730820d|12113539694h|726812381657m|43608742899428s] ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l
[*] [17] [13147/14344391] [0.00] [13123001335d|314952032051h|18897121923085m|1133827315385150s] ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l ?l
[*] Coverage is %81 (11701580/14344391)
[*] Total time 13720867497d|329300819931h|19758049195865m|1185482951751954s
Using the above masks it is possible to achieve significantly better cracking times while still preserving more than 80% total password coverage. For example, for passwords of 15 characters or longer, the cracking character set can be reduced to lowercase alpha characters only.
Let's bring the above output down to a much more reasonable time to satisfy our goal of about 50% coverage within roughly a day. For that, we will increase the occurrence threshold and also cap each mask's runtime with the `--maxtime` parameter (specified in seconds):
$ python maskgen.py --occurrence=100000 --maxtime=8640 rockyou.csv
[*] [5] [125816/14344391] [0.00] [0d|0h|0m|0s] ?l ?l ?l ?l ?l
[*] [6] [1321621/14344391] [9.00] [0d|0h|0m|2s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[*] [7] [1847487/14344391] [12.00] [0d|0h|1m|78s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[*] [8] [2114310/14344391] [14.00] [0d|0h|47m|2821s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[*] [9] [1563883/14344391] [10.00] [1d|28h|1692m|101559s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[*] [10] [638820/14344391] [4.00] [0d|6h|362m|21767s] ?d?l ?d?l ?d?l ?d?l ?d?l ?d?l ?d ?d ?d ?d
[*] [11] [107864/14344391] [0.00] [0d|0h|1m|100s] ?d ?d ?d ?d ?d ?d ?d ?d ?d ?d ?d
[*] Coverage is %53 (7719801/14344391)
[*] Total time 1d|35h|2105m|126327s
We have almost reached the time requirement, but we can fine-tune it further by adding a maximum password complexity. Password complexity is determined by the number of all possible passwords matching a mask. For example, the mask "?l?d ?l?d" can match up to (26+10)^2 or 1296 passwords. In this example we will relax the occurrence flag but add a maximum password complexity of 2821109907456, which corresponds to an eight-character loweralphanumeric keyspace, (26+10)^8. We will also include the `--showmasks` flag to see the exact component masks, their respective counts and relative percentages.
$ maskgen.py --occurrence=50000 --maxtime=8640 --complexity=2821109907456 --showmasks rockyou.csv
[*] [5] [125816/14344391] [0.00] [0d|0h|0m|0s] ?l ?l ?l ?l ?l
[5] [125816/125816] [100.00] [0.00] [0d|0h|0m|0s] ?l?l?l?l?l
[*] [6] [1569957/14344391] [10.00] [0d|0h|0m|56s] ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d ?u?l?d
[6] [601257/1569957] [38.00] [4.00] [0d|0h|0m|0s] ?l?l?l?l?l?l
[6] [390546/1569957] [24.00] [2.00] [0d|0h|0m|0s] ?d?d?d?d?d?d
[6] [215079/1569957] [13.00] [1.00] [0d|0h|0m|0s] ?l?l?l?l?d?d
[6] [114739/1569957] [7.00] [0.00] [0d|0h|0m|0s] ?l?l?l?l?l?d
[6] [98305/1569957] [6.00] [0.00] [0d|0h|0m|0s] ?l?l?d?d?d?d
[6] [98189/1569957] [6.00] [0.00] [0d|0h|0m|0s] ?l?l?l?d?d?d
[6] [51842/1569957] [3.00] [0.00] [0d|0h|0m|0s] ?u?u?u?u?u?u
[*] [7] [1902375/14344391] [13.00] [0d|0h|1m|78s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[7] [585093/1902375] [30.00] [4.00] [0d|0h|0m|8s] ?l?l?l?l?l?l?l
[7] [487437/1902375] [25.00] [3.00] [0d|0h|0m|0s] ?d?d?d?d?d?d?d
[7] [292318/1902375] [15.00] [2.00] [0d|0h|0m|1s] ?l?l?l?l?l?d?d
[7] [193110/1902375] [10.00] [1.00] [0d|0h|0m|3s] ?l?l?l?l?l?l?d
[7] [178308/1902375] [9.00] [1.00] [0d|0h|0m|0s] ?l?l?l?d?d?d?d
[7] [111221/1902375] [5.00] [0.00] [0d|0h|0m|0s] ?l?l?l?l?d?d?d
[7] [54888/1902375] [2.00] [0.00] [0d|0h|0m|0s] ?l?d?d?d?d?d?d
[*] [8] [2114310/14344391] [14.00] [0d|0h|47m|2821s] ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d ?l?d
[8] [688053/2114310] [32.00] [4.00] [0d|0h|3m|208s] ?l?l?l?l?l?l?l?l
[8] [428306/2114310] [20.00] [2.00] [0d|0h|0m|0s] ?d?d?d?d?d?d?d?d
[8] [420326/2114310] [19.00] [2.00] [0d|0h|0m|30s] ?l?l?l?l?l?l?d?d
[8] [235364/2114310] [11.00] [1.00] [0d|0h|0m|4s] ?l?l?l?l?d?d?d?d
[8] [189855/2114310] [8.00] [1.00] [0d|0h|1m|80s] ?l?l?l?l?l?l?l?d
[8] [152406/2114310] [7.00] [1.00] [0d|0h|0m|11s] ?l?l?l?l?l?d?d?d
[*] [9] [1047021/14344391] [7.00] [0d|7h|470m|28211s] ?d?l ?d?l ?d?l ?d?l ?d?l ?d?l ?d?l ?d?l ?d
[9] [307540/1047021] [29.00] [2.00] [0d|0h|0m|1s] ?d?d?d?d?d?d?d?d?d
[9] [273640/1047021] [26.00] [1.00] [0d|0h|13m|803s] ?l?l?l?l?l?l?l?d?d
[9] [173560/1047021] [16.00] [1.00] [0d|0h|1m|118s] ?l?l?l?l?l?d?d?d?d
[9] [160061/1047021] [15.00] [1.00] [0d|0h|34m|2088s] ?l?l?l?l?l?l?l?l?d
[9] [132220/1047021] [12.00] [0.00] [0d|0h|5m|308s] ?l?l?l?l?l?l?d?d?d
[*] [10] [478224/14344391] [3.00] [0d|0h|0m|10s] ?d ?d ?d ?d ?d ?d ?d ?d ?d ?d
[10] [478224/478224] [100.00] [3.00] [0d|0h|0m|10s] ?d?d?d?d?d?d?d?d?d?d
[*] [11] [107864/14344391] [0.00] [0d|0h|1m|100s] ?d ?d ?d ?d ?d ?d ?d ?d ?d ?d ?d
[11] [107864/107864] [100.00] [0.00] [0d|0h|1m|100s] ?d?d?d?d?d?d?d?d?d?d?d
[*] Coverage is %51 (7345567/14344391)
[*] Total time 0d|8h|521m|31276s
Aha! By giving up only a few cracked passwords, we have significantly reduced the cracking time. There is a wealth of information here to analyze, and with enough practice you should be able to arrive at a mask combination representative of your target.
There are a few additional useful parameters not covered above:
--pps - Passwords-per-second speed of your cracking setup (used for runtime estimates)
--minlength and --maxlength - Define minimum and maximum password lengths
--checkmask - Check how many times a particular mask appears in the sample
For example, let's find out how well the mask "?l ?l ?l ?l ?l ?l ?l?d ?l?d" performs against the sample:
$ python maskgen.py --checkmask="?l ?l ?l ?l ?l ?l ?l?d ?l?d" --showmasks rockyou.csv
[*] [8] [1305708/14344391] [9.00] [0d|0h|6m|400s] ?l ?l ?l ?l ?l ?l ?l?d ?l?d
[8] [688053/1305708] [52.00] [4.00] [0d|0h|3m|208s] ?l?l?l?l?l?l?l?l
[8] [420326/1305708] [32.00] [2.00] [0d|0h|0m|30s] ?l?l?l?l?l?l?d?d
[8] [189855/1305708] [14.00] [1.00] [0d|0h|1m|80s] ?l?l?l?l?l?l?l?d
[8] [7474/1305708] [0.00] [0.00] [0d|0h|1m|80s] ?l?l?l?l?l?l?d?l
[*] Coverage is %9 (1305708/14344391)
[*] Total time 0d|0h|6m|400s
The above output tells you that this mask matches only 9 percent of the sample passwords and will take about 6 minutes of cracking time.
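The time columns in all of the above output are simply mask complexity divided by cracking speed. A quick sketch, assuming maskgen's default --pps of one billion (the helper name `crack_seconds` is ours):

```python
# Passwords-per-second estimate; 1000000000 is maskgen's default --pps
PPS = 1000000000

def crack_seconds(candidates, pps=PPS):
    """Estimated seconds to exhaust a keyspace of `candidates` passwords."""
    return candidates // pps

print(crack_seconds(26 ** 8))   # 208 - matches the ?l?l?l?l?l?l?l?l line above
print(crack_seconds(10 ** 10))  # 10  - matches the ?d x 10 line above
```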
PolicyGen
--------------
Many dictionary attacks will fail in corporate environments that enforce minimum password complexity rules. Instead of falling back to pure bruteforcing, we can leverage the known password policy to avoid trying candidates that cannot be compliant (e.g. ?l?l?l?l?l?l?l?l when at least one digit is required). Using PolicyGen, you can generate a list of policy-compliant (or intentionally non-compliant) masks that significantly decrease the cracking time. Below is a sample session where we generate all valid password masks for an environment requiring at least one digit, one uppercase, and one special character.
$ python policygen.py --output=masks.txt --mindigit=1 --minupper=1 --minspecial=1 --length 8 --pps=40000000000
[*] Password policy:
[+] Password length: 8
[+] Minimum strength: lower: 0, upper: 1, digits: 1, special: 1
[+] Maximum strength: lower: 8, upper: 8, digits: 8, special: 8
[*] Total Masks: 65536 Runtime: [1d|38h|2324m|139463s]
[*] Policy Masks: 46620 Runtime: [0d|18h|1097m|65828s]
From the output above, the total bruteforce runtime for an eight-character password at 40 billion passwords per second is about 38 hours. However, by cracking only passwords matching the specified policy, we can reduce this time to about 18 hours!
Here is a snippet of the 'masks.txt' generated by the above command:
?l?l?l?l?l?u?d?s
?l?l?l?l?l?u?s?d
?l?l?l?l?l?d?u?s
?l?l?l?l?l?d?s?u
?l?l?l?l?l?s?u?d
?l?l?l?l?l?s?d?u
?l?l?l?l?u?l?d?s
?l?l?l?l?u?l?s?d
?l?l?l?l?u?u?d?s
?l?l?l?l?u?u?s?d
...
Each of the above masks contains at least one digit, one uppercase, and one special character.
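The enumeration that produces such masks can be sketched with itertools.product (a simplified, hypothetical stand-in for policygen.py that skips the runtime accounting; `policy_masks` is our name, not the tool's):

```python
import itertools

def policy_masks(length, mindigits=1, minupper=1, minspecial=1):
    """Yield every mask of the given length containing at least the
    required number of digit, uppercase and special positions."""
    for combo in itertools.product("luds", repeat=length):
        if (combo.count("d") >= mindigits and
                combo.count("u") >= minupper and
                combo.count("s") >= minspecial):
            yield "".join("?" + c for c in combo)

masks = list(policy_masks(4))
print(len(masks))  # 60 of the 4**4 = 256 possible four-position masks comply
```

The same inclusion-exclusion logic scales to length 8, which is how the policy-mask count ends up well below the 4^8 = 65536 total masks.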
Run PolicyGen in verbose mode to see the exact character counts and complexity statistics for each mask:
$ python policygen.py --output=masks.txt --mindigit=1 --minupper=1 --minspecial=1 --length 8 --pps=40000000000 --verbose | head
[*] Password policy:
[+] Password length: 8
[+] Minimum strength: lower: 0, upper: 1, digits: 1, special: 1
[+] Maximum strength: lower: 8, upper: 8, digits: 8, special: 8
[*] [0d|0h|0m|2s] ?l?l?l?l?l?u?d?s [l:5 u:1 d:1 s:1]
[*] [0d|0h|0m|2s] ?l?l?l?l?l?u?s?d [l:5 u:1 d:1 s:1]
[*] [0d|0h|0m|2s] ?l?l?l?l?l?d?u?s [l:5 u:1 d:1 s:1]
[*] [0d|0h|0m|2s] ?l?l?l?l?l?d?s?u [l:5 u:1 d:1 s:1]
[*] [0d|0h|0m|2s] ?l?l?l?l?l?s?u?d [l:5 u:1 d:1 s:1]
[*] [0d|0h|0m|2s] ?l?l?l?l?l?s?d?u [l:5 u:1 d:1 s:1]
...
NOTE: You can also use this program to audit existing passwords for compliance with a defined minimum password complexity policy. Simply swap --mindigits for --maxdigits (and likewise for the other character classes) to generate masks matching non-compliant passwords.
Conclusion
==============
While this guide introduces a number of methods to analyze passwords, reverse rules and generate masks, there are plenty of other tricks waiting to be discovered. I would be excited to hear about any unusual uses of, or suggestions for, the covered tools.
Happy Cracking!
-Peter
maskgen.py (new executable file, 238 lines):
#!/usr/bin/python
# MaskGen - Generate Password Masks
#
# This tool is part of PACK (Password Analysis and Cracking Kit)
#
# VERSION 0.0.2
#
# Copyright (C) 2013 Peter Kacherginsky
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import csv, string
from operator import itemgetter
from optparse import OptionParser
VERSION = "0.0.2"
# PPS (Passwords per Second) Cracking Speed
pps = 1000000000
# Global Variables
mastermasks = dict()
allmasks = dict()
##################################################################
## Calculate complexity of a single mask
##################################################################
def complexity(mask):
    count = 1
    for char in mask[1:].split("?"):
        if   char == "l": count *= 26
        elif char == "u": count *= 26
        elif char == "d": count *= 10
        elif char == "s": count *= 33
        else: print "[!] Error, unknown mask ?%s" % char
    return count

###################################################################
## Calculate complexity of a complex mask
###################################################################
def maskcomplexity(mask):
    complexity = 1
    for submask in mask.split(" "):
        permutations = 0
        for char in submask[1:].split("?"):
            if   char == "l": permutations += 26
            elif char == "u": permutations += 26
            elif char == "d": permutations += 10
            elif char == "s": permutations += 33
            else: print "[!] Error, unknown mask ?%s" % char
        if permutations: complexity *= permutations
    return complexity

###################################################################
## Check if complex mask matches a sample mask
###################################################################
def matchmask(checkmask, mask):
    length = len(mask)/2
    checklength = len(checkmask.split(" "))
    if length == checklength:
        masklist = mask[1:].split("?")
        for i, submask in enumerate(checkmask.split(" ")):
            # the inner loop breaks once the sample's class at this
            # position is covered by the check mask, otherwise no match
            for char in submask[1:].split("?"):
                if char == masklist[i]:
                    break
            else:
                return False
        else:
            return True
    else:
        return False

###################################################################
## Combine masks
###################################################################
def genmask(mask):
    global mastermasks
    length = len(mask)/2
    try:
        lengthmask = mastermasks[length]
    except:
        mastermasks[length] = dict()
        lengthmask = mastermasks[length]
    for i,v in enumerate(mask[1:].split("?")):
        try:
            positionmask = lengthmask[i]
        except:
            lengthmask[i] = set()
            positionmask = lengthmask[i]
        positionmask.add("?%s" % v)

###################################################################
## Store all masks based on length and count
###################################################################
def storemask(mask, occurrence):
    global allmasks
    length = len(mask)/2
    #print "Storing mask %s" % mask
    try:
        lengthmask = allmasks[length]
    except:
        allmasks[length] = dict()
        lengthmask = allmasks[length]
    lengthmask[mask] = int(occurrence)
def main():
    # Constants
    total_occurrence = 0
    sample_occurrence = 0
    sample_time = 0

    # TODO: I want to actually see statistical analysis of masks not just based
    #       on size but also frequency and time per length and per count

    header  = "                       _ \n"
    header += "     MaskGen 0.0.2    | |\n"
    header += "      _ __   __ _  ___| | _\n"
    header += "     | '_ \ / _` |/ __| |/ /\n"
    header += "     | |_) | (_| | (__|   < \n"
    header += "     | .__/ \__,_|\___|_|\_\\\n"
    header += "     | |                    \n"
    header += "     |_| iphelix@thesprawl.org\n"
    header += "\n"

    parser = OptionParser("%prog [options] masksfile.csv", version="%prog "+VERSION)
    parser.add_option("--minlength", dest="minlength", help="Minimum password length", type="int", metavar="8")
    parser.add_option("--maxlength", dest="maxlength", help="Maximum password length", type="int", metavar="8")
    parser.add_option("--mintime", dest="mintime", help="Minimum time to crack", type="int", metavar="")
    parser.add_option("--maxtime", dest="maxtime", help="Maximum time to crack", type="int", metavar="")
    parser.add_option("--complexity", dest="complexity", help="maximum password complexity", type="int", metavar="")
    parser.add_option("--occurrence", dest="occurrence", help="minimum times mask was used", type="int", metavar="")
    parser.add_option("--checkmask", dest="checkmask", help="check mask coverage", metavar="?u?l ?l ?l ?l ?l ?d")
    parser.add_option("--showmasks", dest="showmasks", help="Show matching masks", action="store_true", default=False)
    parser.add_option("--pps", dest="pps", help="Passwords per Second", type="int", default=pps, metavar="1000000000")
    parser.add_option("-q", "--quiet", action="store_true", dest="quiet", default=False, help="Don't show headers.")
    (options, args) = parser.parse_args()

    # Print program header
    if not options.quiet:
        print header

    if len(args) != 1:
        parser.error("no masks file specified")
        exit(1)

    print "[*] Analysing masks: %s" % args[0]

    maskReader = csv.reader(open(args[0],'r'), delimiter=',', quotechar='"')
    #headerline = maskReader.next()

    # Check the coverage of a particular mask for a given set
    if options.checkmask:
        length = len(options.checkmask.split(" "))

        # Prepare master mask list for analysis
        mastermasks[length] = dict()
        lengthmask = mastermasks[length]
        for i, submask in enumerate(options.checkmask.split(" ")):
            lengthmask[i] = set()
            positionmask = lengthmask[i]
            for char in submask[1:].split("?"):
                positionmask.add("?%s" % char)

        for (mask, occurrence) in maskReader:
            total_occurrence += int(occurrence)
            if matchmask(options.checkmask, mask):
                sample_occurrence += int(occurrence)
                storemask(mask, occurrence)

    # Generate masks from a given set
    else:
        for (mask, occurrence) in maskReader:
            total_occurrence += int(occurrence)
            if (not options.occurrence or int(occurrence) >= options.occurrence) and \
               (not options.maxlength or len(mask)/2 <= options.maxlength) and \
               (not options.minlength or len(mask)/2 >= options.minlength) and \
               (not options.complexity or complexity(mask) <= options.complexity) and \
               (not options.maxtime or complexity(mask)/options.pps <= options.maxtime) and \
               (not options.mintime or complexity(mask)/options.pps >= options.mintime):
                genmask(mask)
                storemask(mask, occurrence)
                sample_occurrence += int(occurrence)

    ####################################################################################
    ## Analysis
    ####################################################################################
    for length, lengthmask in sorted(mastermasks.iteritems()):
        maskstring = ""
        for position, maskset in lengthmask.iteritems():
            maskstring += "%s " % string.join(maskset, "")

        mask_time = maskcomplexity(maskstring)/options.pps
        sample_time += mask_time

        length_occurrence = 0
        for mask, occurrence in allmasks[length].iteritems():
            length_occurrence += int(occurrence)

        print "[*] [%d] [%d/%d] [%.02f] [%dd|%dh|%dm|%ds] %s" % \
            (length, length_occurrence, total_occurrence, length_occurrence*100/total_occurrence,
             mask_time/60/60/24, mask_time/60/60, mask_time/60, mask_time, maskstring)

        if options.showmasks:
            for mask, mask_occurrence in sorted(allmasks[length].iteritems(), key=itemgetter(1), reverse=True):
                mask_time = complexity(mask)/options.pps
                print "    [%d] [%d/%d] [%.02f] [%.02f] [%dd|%dh|%dm|%ds] %s" % \
                    (length, mask_occurrence, length_occurrence, mask_occurrence*100/length_occurrence,
                     mask_occurrence*100/total_occurrence, mask_time/60/60/24, mask_time/60/60,
                     mask_time/60, mask_time, mask)

    print "[*] Coverage is %%%d (%d/%d)" % (sample_occurrence*100/total_occurrence, sample_occurrence, total_occurrence)
    print "[*] Total time [%dd|%dh|%dm|%ds]" % (sample_time/60/60/24, sample_time/60/60, sample_time/60, sample_time)

if __name__ == "__main__":
    main()
policygen.py (new executable file, 158 lines):
#!/usr/bin/python
# PolicyGen - Analyze and Generate password masks according to a password policy
#
# This tool is part of PACK (Password Analysis and Cracking Kit)
#
# VERSION 0.0.2
#
# Copyright (C) 2013 Peter Kacherginsky
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import string, random
from optparse import OptionParser, OptionGroup
import itertools
VERSION = "0.0.1"
# PPS (Passwords per Second) Cracking Speed
pps = 1000000000
# Global Variables
sample_time = 0
total_time = 0
##################################################################
# Calculate complexity of a single mask
##################################################################
def complexity(mask):
    count = 1
    for char in mask[1:].split("?"):
        if   char == "l": count *= 26
        elif char == "u": count *= 26
        elif char == "d": count *= 10
        elif char == "s": count *= 33
        else: print "[!] Error, unknown mask ?%s" % char
    return count

###################################################################
# Check whether a sample password mask matches defined policy
###################################################################
def filtermask(maskstring, options):
    global total_time, sample_time

    # define character counters
    lowercount = uppercount = digitcount = specialcount = 0

    # calculate password complexity and cracking time
    mask_time = complexity(maskstring)/options.pps
    total_time += mask_time

    for char in maskstring[1:].split("?"):
        if   char == "l": lowercount   += 1
        elif char == "u": uppercount   += 1
        elif char == "d": digitcount   += 1
        elif char == "s": specialcount += 1

    # Filter according to password policy
    if lowercount >= options.minlower and lowercount <= options.maxlower and \
       uppercount >= options.minupper and uppercount <= options.maxupper and \
       digitcount >= options.mindigits and digitcount <= options.maxdigits and \
       specialcount >= options.minspecial and specialcount <= options.maxspecial:
        sample_time += mask_time
        if options.verbose:
            print "[*] [%dd|%dh|%dm|%ds] %s [l:%d u:%d d:%d s:%d]" % \
                (mask_time/60/60/24, mask_time/60/60, mask_time/60, mask_time,
                 maskstring, lowercount, uppercount, digitcount, specialcount)
        return True
    else:
        return False
def main():
    # define mask counters
    total_count = sample_count = 0

    header  = "                       _ \n"
    header += "     PolicyGen 0.0.1  | |\n"
    header += "      _ __   __ _  ___| | _\n"
    header += "     | '_ \ / _` |/ __| |/ /\n"
    header += "     | |_) | (_| | (__|   < \n"
    header += "     | .__/ \__,_|\___|_|\_\\\n"
    header += "     | |                    \n"
    header += "     |_| iphelix@thesprawl.org\n"
    header += "\n"

    # parse command line arguments
    parser = OptionParser("%prog [options]\n\nType --help for more options", version="%prog "+VERSION)
    parser.add_option("--length", dest="length", help="Password length", type="int", default=8, metavar="8")
    parser.add_option("-o", "--output", dest="output", help="Save masks to a file", metavar="masks.txt")
    parser.add_option("--pps", dest="pps", help="Passwords per Second", type="int", default=pps, metavar="1000000000")
    parser.add_option("-v", "--verbose", action="store_true", dest="verbose")

    group = OptionGroup(parser, "Password Policy", "Define the minimum (or maximum) password strength policy that you would like to test")
    group.add_option("--mindigits", dest="mindigits", help="Minimum number of digits", default=0, type="int", metavar="1")
    group.add_option("--minlower", dest="minlower", help="Minimum number of lower-case characters", default=0, type="int", metavar="1")
    group.add_option("--minupper", dest="minupper", help="Minimum number of upper-case characters", default=0, type="int", metavar="1")
    group.add_option("--minspecial", dest="minspecial", help="Minimum number of special characters", default=0, type="int", metavar="1")
    group.add_option("--maxdigits", dest="maxdigits", help="Maximum number of digits", default=9999, type="int", metavar="3")
    group.add_option("--maxlower", dest="maxlower", help="Maximum number of lower-case characters", default=9999, type="int", metavar="3")
    group.add_option("--maxupper", dest="maxupper", help="Maximum number of upper-case characters", default=9999, type="int", metavar="3")
    group.add_option("--maxspecial", dest="maxspecial", help="Maximum number of special characters", default=9999, type="int", metavar="3")
    parser.add_option("-q", "--quiet", action="store_true", dest="quiet", default=False, help="Don't show headers.")
    parser.add_option_group(group)
    (options, args) = parser.parse_args()

    # cleanup maximum occurrence options
    if options.maxlower   > options.length: options.maxlower   = options.length
    if options.maxdigits  > options.length: options.maxdigits  = options.length
    if options.mindigits  > options.length: options.mindigits  = options.length
    if options.maxupper   > options.length: options.maxupper   = options.length
    if options.maxspecial > options.length: options.maxspecial = options.length

    # Print program header
    if not options.quiet:
        print header

    # print current password policy
    print "[*] Password policy:"
    print "[+] Password length: %d" % options.length
    print "[+] Minimum strength: lower: %d, upper: %d, digits: %d, special: %d" % (options.minlower, options.minupper, options.mindigits, options.minspecial)
    print "[+] Maximum strength: lower: %d, upper: %d, digits: %d, special: %d" % (options.maxlower, options.maxupper, options.maxdigits, options.maxspecial)

    if options.output: f = open(options.output, 'w')

    # generate all possible password masks and compare them to policy
    # TODO: Randomize or even statistically arrange matching masks
    for password in itertools.product(['?l','?u','?d','?s'], repeat=options.length):
        if filtermask(''.join(password), options):
            if options.output: f.write("%s\n" % ''.join(password))
            sample_count += 1
        total_count += 1
    if options.output: f.close()

    print "[*] Total Masks:  %d Runtime: [%dd|%dh|%dm|%ds]" % (total_count, total_time/60/60/24, total_time/60/60, total_time/60, total_time)
    print "[*] Policy Masks: %d Runtime: [%dd|%dh|%dm|%ds]" % (sample_count, sample_time/60/60/24, sample_time/60/60, sample_time/60, sample_time)

if __name__ == "__main__":
    main()
rulegen.py (new executable file, 925 lines, truncated below):
#!/usr/bin/env python
# Rulegen.py - Advanced automated password rule and wordlist generator for the
# Hashcat password cracker using the Levenshtein Reverse Path
# algorithm and Enchant spell checking library.
#
# This tool is part of PACK (Password Analysis and Cracking Kit)
#
# VERSION 0.0.1
#
# Copyright (C) 2013 Peter Kacherginsky
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# CHANGELOG:
# [*] Fixed greedy substitution issue (thanks smarteam)
import sys
import re
import time
import operator
import enchant
from optparse import OptionParser, OptionGroup
VERSION = "0.0.1"
# Testing rules with hashcat --stdout
import subprocess
HASHCAT_PATH = "hashcat-0.42/"
# Rule Generator class responsible for the complete cycle of rule generation
class RuleGen:

    # Initialize Rule Generator class
    def __init__(self, language="en", providers="aspell,myspell", basename='analysis'):
        self.enchant_broker = enchant.Broker()
        self.enchant_broker.set_ordering("*", providers)
        self.enchant = enchant.Dict(language, self.enchant_broker)

        # Output options
        self.basename = basename
        self.output_rules_f = open("%s.rule" % basename, 'w')
        self.output_words_f = open("%s.word" % basename, 'w')

        # Finetuning word generation
        self.max_word_dist = 10
        self.max_words = 10
        self.more_words = False
        self.simple_words = False

        # Finetuning rule generation
        self.max_rule_len = 10
        self.max_rules = 10
        self.more_rules = False
        self.simple_rules = False

        # Debugging options
        self.verbose = False
        self.debug = False
        self.word = None
        self.quiet = False

        ########################################################################
        # Word and Rule Statistics
        self.word_stats = dict()
        self.rule_stats = dict()
        self.password_stats = dict()
        self.numeric_stats_total = 0
        self.special_stats_total = 0
        self.foreign_stats_total = 0

        ########################################################################
        # Preanalysis Password Patterns
        self.password_pattern = dict()
        self.password_pattern["insertion"]  = re.compile('^[^a-z]*(?P<password>.+?)[^a-z]*$', re.IGNORECASE)
        self.password_pattern["email"]      = re.compile('^(?P<password>.+?)@[A-Z0-9.-]+\.[A-Z]{2,4}', re.IGNORECASE)
        self.password_pattern["alldigits"]  = re.compile('^(\d+)$', re.IGNORECASE)
        self.password_pattern["allspecial"] = re.compile('^([^a-z0-9]+)$', re.IGNORECASE)

        ########################################################################
        # Hashcat Rules Engine
        self.hashcat_rule = dict()

        # Dummy rule
        self.hashcat_rule[':'] = lambda x: x  # Do nothing

        # Case rules
        self.hashcat_rule["l"] = lambda x: x.lower()       # Lowercase all letters
        self.hashcat_rule["u"] = lambda x: x.upper()       # Uppercase all letters
        self.hashcat_rule["c"] = lambda x: x.capitalize()  # Capitalize the first letter
        self.hashcat_rule["C"] = lambda x: x[0].lower() + x[1:].upper()  # Lowercase the first character, uppercase the rest
        self.hashcat_rule["t"] = lambda x: x.swapcase()    # Toggle the case of all characters in word
        self.hashcat_rule["T"] = lambda x,y: x[:y] + x[y].swapcase() + x[y+1:]  # Toggle the case of the character at position N
        self.hashcat_rule["E"] = lambda x: " ".join([i[0].upper()+i[1:] for i in x.split(" ")])  # Uppercase the first letter and every letter after a space

        # Rotation rules
        self.hashcat_rule["r"] = lambda x: x[::-1]      # Reverse the entire word
        self.hashcat_rule["{"] = lambda x: x[1:]+x[0]   # Rotate the word left
        self.hashcat_rule["}"] = lambda x: x[-1]+x[:-1] # Rotate the word right

        # Duplication rules
        self.hashcat_rule["d"] = lambda x: x+x          # Duplicate entire word
        self.hashcat_rule["p"] = lambda x,y: x*y        # Duplicate entire word N times
        self.hashcat_rule["f"] = lambda x: x+x[::-1]    # Duplicate word reversed
        self.hashcat_rule["z"] = lambda x,y: x[0]*y+x   # Duplicate first character N times
        self.hashcat_rule["Z"] = lambda x,y: x+x[-1]*y  # Duplicate last character N times
        self.hashcat_rule["q"] = lambda x: "".join([i+i for i in x])  # Duplicate every character
        self.hashcat_rule["y"] = lambda x,y: x[:y]+x    # Duplicate first N characters
        self.hashcat_rule["Y"] = lambda x,y: x+x[-y:]   # Duplicate last N characters

        # Cutting rules
        self.hashcat_rule["["] = lambda x: x[1:]              # Delete first character
        self.hashcat_rule["]"] = lambda x: x[:-1]             # Delete last character
        self.hashcat_rule["D"] = lambda x,y: x[:y]+x[y+1:]    # Delete character at position N
        self.hashcat_rule["'"] = lambda x,y: x[:y]            # Truncate word at position N
        self.hashcat_rule["x"] = lambda x,y,z: x[:y]+x[y+z:]  # Delete M characters, starting at position N
        self.hashcat_rule["@"] = lambda x,y: x.replace(y,'')  # Purge all instances of X

        # Insertion rules
        self.hashcat_rule["$"] = lambda x,y: x+y              # Append character to end
        self.hashcat_rule["^"] = lambda x,y: y+x              # Prepend character to front
        self.hashcat_rule["i"] = lambda x,y,z: x[:y]+z+x[y:]  # Insert character X at position N

        # Replacement rules
        self.hashcat_rule["o"] = lambda x,y,z: x[:y]+z+x[y+1:]  # Overwrite character at position N with X
        self.hashcat_rule["s"] = lambda x,y,z: x.replace(y,z)   # Replace all instances of X with Y
        self.hashcat_rule["L"] = lambda x,y: x[:y]+chr(ord(x[y])<<1)+x[y+1:]  # Bitwise shift left character @ N
        self.hashcat_rule["R"] = lambda x,y: x[:y]+chr(ord(x[y])>>1)+x[y+1:]  # Bitwise shift right character @ N
        self.hashcat_rule["+"] = lambda x,y: x[:y]+chr(ord(x[y])+1)+x[y+1:]   # Increment character @ N by 1 ascii value
        self.hashcat_rule["-"] = lambda x,y: x[:y]+chr(ord(x[y])-1)+x[y+1:]   # Decrement character @ N by 1 ascii value
        self.hashcat_rule["."] = lambda x,y: x[:y]+x[y+1]+x[y+1:]  # Replace character @ N with value at @ N plus 1
        self.hashcat_rule[","] = lambda x,y: x[:y]+x[y-1]+x[y+1:]  # Replace character @ N with value at @ N minus 1

        # Swapping rules
        self.hashcat_rule["k"] = lambda x: x[1]+x[0]+x[2:]    # Swap first two characters
        self.hashcat_rule["K"] = lambda x: x[:-2]+x[-1]+x[-2] # Swap last two characters
        self.hashcat_rule["*"] = lambda x,y,z: x[:y]+x[z]+x[y+1:z]+x[y]+x[z+1:] if z > y else x[:z]+x[y]+x[z+1:y]+x[z]+x[y+1:]  # Swap character X with Y

        ########################################################################
        # Common numeric and special character substitutions (1337 5p34k)
        self.leet = dict()
        self.leet["1"] = "i"
        self.leet["2"] = "z"
        self.leet["3"] = "e"
        self.leet["4"] = "a"
        self.leet["5"] = "s"
        self.leet["6"] = "b"
        self.leet["7"] = "t"
        self.leet["8"] = "b"
        self.leet["9"] = "g"
        self.leet["0"] = "o"
        self.leet["!"] = "i"
        self.leet["|"] = "i"
        self.leet["@"] = "a"
        self.leet["$"] = "s"
        self.leet["+"] = "t"
############################################################################
# Calculate Levenshtein edit path matrix
def levenshtein(self,word,password):
matrix = []
# Generate and populate the initial matrix
for i in xrange(len(password) + 1):
matrix.append([])
for j in xrange(len(word) + 1):
if i == 0:
matrix[i].append(j)
elif j == 0:
matrix[i].append(i)
else:
matrix[i].append(0)
# Calculate edit distance for each substring
for i in xrange(1,len(password) + 1):
for j in xrange(1,len(word) + 1):
if password[i-1] == word[j-1]:
matrix[i][j] = matrix[i-1][j-1]
else:
insertion = matrix[i-1][j] + 1
deletion = matrix[i][j-1] + 1
substitution = matrix[i-1][j-1] + 1
matrix[i][j] = min(insertion, deletion, substitution)
return matrix
############################################################################
# Print word X password matrix
def levenshtein_print(self,matrix,word,password):
print " %s" % " ".join(list(word))
for i,row in enumerate(matrix):
if i == 0: print " ",
else: print password[i-1],
print " ".join("%2d" % col for col in row)
############################################################################
# Reverse Levenshtein Path Algorithm by Peter Kacherginsky
# Generates a list of edit operations necessary to transform a source word
# into a password. Edit operations are recorded in the form:
# (operation, password_offset, word_offset)
# Where an operation can be either insertion, deletion or replacement.
def levenshtein_reverse_path(self,matrix,word,password):
paths = self.levenshtein_reverse_recursive(matrix,len(matrix)-1,len(matrix[0])-1,0)
return [path for path in paths if len(path) <= matrix[-1][-1]]
# Calculate reverse Levenshtein paths (recursive, depth first, short-circuited)
def levenshtein_reverse_recursive(self,matrix,i,j,path_len):
if i == 0 and j == 0 or path_len > matrix[-1][-1]:
return [[]]
else:
paths = list()
cost = matrix[i][j]
# Calculate minimum cost of each operation
cost_delete = cost_insert = cost_equal_or_replace = sys.maxint
if i > 0: cost_insert = matrix[i-1][j]
if j > 0: cost_delete = matrix[i][j-1]
if i > 0 and j > 0: cost_equal_or_replace = matrix[i-1][j-1]
cost_min = min(cost_delete, cost_insert, cost_equal_or_replace)
# Recurse through reverse path for each operation
if cost_insert == cost_min:
insert_paths = self.levenshtein_reverse_recursive(matrix,i-1,j,path_len+1)
for insert_path in insert_paths: paths.append(insert_path + [('insert',i-1,j)])
if cost_delete == cost_min:
delete_paths = self.levenshtein_reverse_recursive(matrix,i,j-1,path_len+1)
for delete_path in delete_paths: paths.append(delete_path + [('delete',i,j-1)])
if cost_equal_or_replace == cost_min:
if cost_equal_or_replace == cost:
equal_paths = self.levenshtein_reverse_recursive(matrix,i-1,j-1,path_len)
for equal_path in equal_paths: paths.append(equal_path)
else:
replace_paths = self.levenshtein_reverse_recursive(matrix,i-1,j-1,path_len+1)
for replace_path in replace_paths: paths.append(replace_path + [('replace',i-1,j-1)])
return paths
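The class above enumerates every shortest edit path; a simpler greedy backtrace that recovers just one optimal path can illustrate the idea (a Python 3 sketch, not the author's implementation, but it emits the same `(operation, password_offset, word_offset)` tuples):

```python
def edit_matrix(word, password):
    # Same DP as RuleGen.levenshtein(): m[i][j] is the distance
    # between password[:i] and word[:j].
    m = [[0] * (len(word) + 1) for _ in range(len(password) + 1)]
    for i in range(len(password) + 1):
        m[i][0] = i
    for j in range(len(word) + 1):
        m[0][j] = j
    for i in range(1, len(password) + 1):
        for j in range(1, len(word) + 1):
            if password[i - 1] == word[j - 1]:
                m[i][j] = m[i - 1][j - 1]
            else:
                m[i][j] = 1 + min(m[i - 1][j], m[i][j - 1], m[i - 1][j - 1])
    return m

def one_edit_path(matrix, word, password):
    # Walk back from the bottom-right corner; each step moves to a
    # predecessor whose cost is exactly one less (or equal, on a match).
    ops = []
    i, j = len(password), len(word)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and password[i - 1] == word[j - 1] \
                and matrix[i][j] == matrix[i - 1][j - 1]:
            i, j = i - 1, j - 1                        # characters match
        elif i > 0 and j > 0 and matrix[i][j] == matrix[i - 1][j - 1] + 1:
            ops.append(('replace', i - 1, j - 1))
            i, j = i - 1, j - 1
        elif i > 0 and matrix[i][j] == matrix[i - 1][j] + 1:
            ops.append(('insert', i - 1, j))
            i -= 1
        else:
            ops.append(('delete', i, j - 1))
            j -= 1
    return ops[::-1]
```

For example, transforming "word" into "Word1" needs one replacement and one insertion, so the recovered path has exactly `matrix[-1][-1]` operations.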
############################################################################
def load_custom_wordlist(self,wordlist_file):
self.enchant = enchant.request_pwl_dict(wordlist_file)
############################################################################
# Generate source words
def generate_words_collection(self,password):
if self.debug: print "[*] Generating source words for %s" % password
words = []
if not self.simple_words: suggestions = self.generate_advanced_words(password)
else: suggestions = self.generate_simple_words(password)
best_found_distance = sys.maxint
unique_suggestions = []
for word in suggestions:
word = word.replace(' ','')
word = word.replace('-','')
if not word in unique_suggestions:
unique_suggestions.append(word)
# NOTE: Enchant already returned a list sorted by edit distance, so
# we simply need to get the best edit distance of the first word
# and compare the rest with it
for word in unique_suggestions:
matrix = self.levenshtein(word,password)
edit_distance = matrix[-1][-1]
# Record best edit distance and skip anything exceeding it
if not self.more_words:
if edit_distance < best_found_distance:
best_found_distance = edit_distance
elif edit_distance > best_found_distance:
if self.verbose: print "[-] %s => {best distance exceeded: %d (%d)} => %s" % (word,edit_distance,best_found_distance,password)
break
if edit_distance <= self.max_word_dist:
if self.debug: print "[+] %s => {edit distance: %d (%d)} => %s" % (word,edit_distance,best_found_distance,password)
words.append((word,matrix,password))
if not word in self.word_stats: self.word_stats[word] = 1
else: self.word_stats[word] += 1
else:
if self.verbose: print "[-] %s => {max distance exceeded: %d (%d)} => %s" % (word,edit_distance,self.max_word_dist,password)
return words
############################################################################
# Generate simple words
def generate_simple_words(self,password):
if self.word:
return [self.word]
else:
return self.enchant.suggest(password)[:self.max_words]
############################################################################
# Generate advanced words
def generate_advanced_words(self,password):
if self.word:
return [self.word]
else:
# Remove non-alpha prefix and appendix
insertion_matches = self.password_pattern["insertion"].match(password)
if insertion_matches:
password = insertion_matches.group('password')
# Email split
email_matches = self.password_pattern["email"].match(password)
if email_matches:
password = email_matches.group('password')
# Replace common special character replacements (1337 5p34k)
preanalysis_password = ''
for c in password:
if c in self.leet: preanalysis_password += self.leet[c]
else: preanalysis_password += c
password = preanalysis_password
if self.debug: print "[*] Preanalysis Password: %s" % password
return self.enchant.suggest(password)[:self.max_words]
############################################################################
# Hashcat specific offset definition 0-9,A-Z
def int_to_hashcat(self,N):
if N < 10: return N
else: return chr(65+N-10)
def hashcat_to_int(self,N):
if N.isdigit(): return int(N)
else: return ord(N)-65+10
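Hashcat rule syntax encodes positions 0-35 as a single character: digits for 0-9, then `A`-`Z` for 10-35. A small sketch of the two converters above (Python 3; returning a string for both cases, unlike the original which returns an int for positions below 10):

```python
def int_to_hashcat(n):
    # 0-9 stay digits; 10, 11, ... map to 'A', 'B', ...
    return str(n) if n < 10 else chr(ord('A') + n - 10)

def hashcat_to_int(c):
    return int(c) if c.isdigit() else ord(c) - ord('A') + 10

# int_to_hashcat(10) -> 'A', hashcat_to_int('Z') -> 35
```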
############################################################################
# Generate hashcat rules
def generate_hashcat_rules_collection(self, lev_rules_collection):
hashcat_rules_collection = []
min_hashcat_rules_length = sys.maxint
for (word,rules,password) in lev_rules_collection:
if self.simple_rules:
hashcat_rules = self.generate_simple_hashcat_rules(word,rules,password)
else:
hashcat_rules = self.generate_advanced_hashcat_rules(word,rules,password)
if hashcat_rules is not None:
hashcat_rules_length = len(hashcat_rules)
if hashcat_rules_length <= self.max_rule_len:
hashcat_rules_collection.append((word,hashcat_rules,password))
# Determine minimal hashcat rules length
if hashcat_rules_length < min_hashcat_rules_length:
min_hashcat_rules_length = hashcat_rules_length
else:
if self.verbose: print "[!] %s => {max rule length exceeded: %d (%d)} => %s" % (word,hashcat_rules_length,self.max_rule_len,password)
else:
print "[!] Processing FAILED: %s => ;( => %s" % (word,password)
print " Sorry about that, please report this failure to"
print " the developer: iphelix [at] thesprawl.org"
# Remove suboptimal rules
if not self.more_rules:
min_hashcat_rules_collection = []
for (word,hashcat_rules,password) in hashcat_rules_collection:
hashcat_rules_length = len(hashcat_rules)
if hashcat_rules_length == min_hashcat_rules_length:
min_hashcat_rules_collection.append((word,hashcat_rules,password))
else:
if self.verbose: print "[!] %s => {rule length suboptimal: %d (%d)} => %s" % (word,hashcat_rules_length,min_hashcat_rules_length,password)
hashcat_rules_collection = min_hashcat_rules_collection
return hashcat_rules_collection
############################################################################
# Generate basic hashcat rules using only basic insert,delete,replace rules
def generate_simple_hashcat_rules(self,word,rules,password):
hashcat_rules = []
if self.debug: print "[*] Simple Processing %s => %s" % (word,password)
# Dynamically apply rules to the source word
# NOTE: In the special case where word == password, this works as well.
word_rules = word
for (op,p,w) in rules:
if self.debug: print "\t[*] Simple Processing Started: %s - %s" % (word_rules, " ".join(hashcat_rules))
if op == 'insert':
hashcat_rules.append("i%s%s" % (self.int_to_hashcat(p),password[p]))
word_rules = self.hashcat_rule['i'](word_rules,p,password[p])
elif op == 'delete':
hashcat_rules.append("D%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['D'](word_rules,p)
elif op == 'replace':
hashcat_rules.append("o%s%s" % (self.int_to_hashcat(p),password[p]))
word_rules = self.hashcat_rule['o'](word_rules,p,password[p])
if self.debug: print "\t[*] Simple Processing Ended: %s => %s => %s" % (word_rules, " ".join(hashcat_rules),password)
# Check if rules result in the correct password
if word_rules == password:
return hashcat_rules
else:
if self.debug: print "[!] Simple Processing FAILED: %s => %s => %s (%s)" % (word," ".join(hashcat_rules),password,word_rules)
return None
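The simple generator relies on three primitive hashcat operations: `iNX` (insert X at position N), `DN` (delete at N), and `oNX` (overwrite position N with X). The class applies them through its `hashcat_rule` dispatch table (defined elsewhere in the file); a standalone sketch of their semantics, limited to single-digit positions for brevity:

```python
def apply_rule(word, rule):
    # Minimal interpreter for the three primitives the simple
    # generator emits (single-digit positions only in this sketch).
    op, rest = rule[0], rule[1:]
    if op == 'i':                           # iNX: insert X at position N
        n, ch = int(rest[0]), rest[1]
        return word[:n] + ch + word[n:]
    if op == 'D':                           # DN: delete character at N
        n = int(rest)
        return word[:n] + word[n + 1:]
    if op == 'o':                           # oNX: overwrite position N with X
        n, ch = int(rest[0]), rest[1]
        return word[:n] + ch + word[n + 1:]
    raise ValueError("unsupported rule: %s" % rule)

word = "password"
for r in ["o1@", "i81"]:    # password -> p@ssword -> p@ssword1
    word = apply_rule(word, r)
# word == "p@ssword1"
```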
############################################################################
# Generate advanced hashcat rules using full range of available rules
def generate_advanced_hashcat_rules(self,word,rules,password):
hashcat_rules = []
if self.debug: print "[*] Advanced Processing %s => %s" % (word,password)
# Dynamically apply and store rules in word_rules variable.
# NOTE: In the special case where word == password, this works as well.
word_rules = word
# Generate case statistics
password_lower = len([c for c in password if c.islower()])
password_upper = len([c for c in password if c.isupper()])
for i,(op,p,w) in enumerate(rules):
if self.debug: print "\t[*] Advanced Processing Started: %s - %s" % (word_rules, " ".join(hashcat_rules))
if op == 'insert':
hashcat_rules.append("i%s%s" % (self.int_to_hashcat(p),password[p]))
word_rules = self.hashcat_rule['i'](word_rules,p,password[p])
elif op == 'delete':
hashcat_rules.append("D%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['D'](word_rules,p)
elif op == 'replace':
# Detecting global replacement such as sXY, l, u, C, c is a non
# trivial problem because different characters may be added or
# removed from the word by other rules. A reliable way to solve
# this problem is to apply all of the rules to the source word
# and keep track of its state at any given time. At the same
# time, global replacement rules can be tested by completing
# the rest of the rules using a simplified engine.
# The sequence of if statements determines the priority of rules
# This rule was made obsolete by a prior global replacement
if word_rules[p] == password[p]:
if self.debug: print "\t[*] Advanced Processing Obsolete Rule: %s - %s" % (word_rules, " ".join(hashcat_rules))
# Swapping rules
elif p < len(password)-1 and p < len(word_rules)-1 and word_rules[p] == password[p+1] and word_rules[p+1] == password[p]:
# Swap first two characters
if p == 0 and self.generate_simple_hashcat_rules( self.hashcat_rule['k'](word_rules), rules[i+1:],password):
hashcat_rules.append("k")
word_rules = self.hashcat_rule['k'](word_rules)
# Swap last two characters
elif p == len(word_rules)-2 and self.generate_simple_hashcat_rules( self.hashcat_rule['K'](word_rules), rules[i+1:],password):
hashcat_rules.append("K")
word_rules = self.hashcat_rule['K'](word_rules)
# Swap any two characters (only adjacent swapping is supported)
elif self.generate_simple_hashcat_rules( self.hashcat_rule['*'](word_rules,p,p+1), rules[i+1:],password):
hashcat_rules.append("*%s%s" % (self.int_to_hashcat(p),self.int_to_hashcat(p+1)))
word_rules = self.hashcat_rule['*'](word_rules,p,p+1)
else:
hashcat_rules.append("o%s%s" % (self.int_to_hashcat(p),password[p]))
word_rules = self.hashcat_rule['o'](word_rules,p,password[p])
# Case Toggle: Uppercased a letter
elif word_rules[p].islower() and word_rules[p].upper() == password[p]:
# Toggle the case of all characters in word (mixed cases)
if password_upper and password_lower and self.generate_simple_hashcat_rules( self.hashcat_rule['t'](word_rules), rules[i+1:],password):
hashcat_rules.append("t")
word_rules = self.hashcat_rule['t'](word_rules)
# Capitalize all letters
elif self.generate_simple_hashcat_rules( self.hashcat_rule['u'](word_rules), rules[i+1:],password):
hashcat_rules.append("u")
word_rules = self.hashcat_rule['u'](word_rules)
# Capitalize the first letter
elif p == 0 and self.generate_simple_hashcat_rules( self.hashcat_rule['c'](word_rules), rules[i+1:],password):
hashcat_rules.append("c")
word_rules = self.hashcat_rule['c'](word_rules)
# Toggle the case of characters at position N
else:
hashcat_rules.append("T%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['T'](word_rules,p)
# Case Toggle: Lowercased a letter
elif word_rules[p].isupper() and word_rules[p].lower() == password[p]:
# Toggle the case of all characters in word (mixed cases)
if password_upper and password_lower and self.generate_simple_hashcat_rules( self.hashcat_rule['t'](word_rules), rules[i+1:],password):
hashcat_rules.append("t")
word_rules = self.hashcat_rule['t'](word_rules)
# Lowercase all letters
elif self.generate_simple_hashcat_rules( self.hashcat_rule['l'](word_rules), rules[i+1:],password):
hashcat_rules.append("l")
word_rules = self.hashcat_rule['l'](word_rules)
# Lowercase the first found character, uppercase the rest
elif p == 0 and self.generate_simple_hashcat_rules( self.hashcat_rule['C'](word_rules), rules[i+1:],password):
hashcat_rules.append("C")
word_rules = self.hashcat_rule['C'](word_rules)
# Toggle the case of characters at position N
else:
hashcat_rules.append("T%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['T'](word_rules,p)
# Special case substitution of 'all' instances (1337 $p34k)
elif word_rules[p].isalpha() and not password[p].isalpha() and self.generate_simple_hashcat_rules( self.hashcat_rule['s'](word_rules,word_rules[p],password[p]), rules[i+1:],password):
# If we have already detected this rule, then skip it thus
# reducing total rule count.
# BUG: Elisabeth => sE3 sl1 u o3Z sE3 => 31IZAB3TH
#if not "s%s%s" % (word_rules[p],password[p]) in hashcat_rules:
hashcat_rules.append("s%s%s" % (word_rules[p],password[p]))
word_rules = self.hashcat_rule['s'](word_rules,word_rules[p],password[p])
# Replace next character with current
elif p < len(password)-1 and p < len(word_rules)-1 and password[p] == password[p+1] and password[p] == word_rules[p+1]:
hashcat_rules.append(".%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['.'](word_rules,p)
# Replace previous character with current
elif p > 0 and w > 0 and password[p] == password[p-1] and password[p] == word_rules[p-1]:
hashcat_rules.append(",%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule[','](word_rules,p)
# ASCII increment
elif ord(word_rules[p]) + 1 == ord(password[p]):
hashcat_rules.append("+%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['+'](word_rules,p)
# ASCII decrement
elif ord(word_rules[p]) - 1 == ord(password[p]):
hashcat_rules.append("-%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['-'](word_rules,p)
# SHIFT left
elif ord(word_rules[p]) << 1 == ord(password[p]):
hashcat_rules.append("L%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['L'](word_rules,p)
# SHIFT right
elif ord(word_rules[p]) >> 1 == ord(password[p]):
hashcat_rules.append("R%s" % self.int_to_hashcat(p))
word_rules = self.hashcat_rule['R'](word_rules,p)
# Position based replacements.
else:
hashcat_rules.append("o%s%s" % (self.int_to_hashcat(p),password[p]))
word_rules = self.hashcat_rule['o'](word_rules,p,password[p])
if self.debug: print "\t[*] Advanced Processing Ended: %s %s" % (word_rules, " ".join(hashcat_rules))
########################################################################
# Prefix rules
last_prefix = 0
prefix_rules = list()
for hashcat_rule in hashcat_rules:
if hashcat_rule[0] == "i" and self.hashcat_to_int(hashcat_rule[1]) == last_prefix:
prefix_rules.append("^%s" % hashcat_rule[2])
last_prefix += 1
elif len(prefix_rules):
hashcat_rules = prefix_rules[::-1]+hashcat_rules[len(prefix_rules):]
break
else:
break
else:
hashcat_rules = prefix_rules[::-1]+hashcat_rules[len(prefix_rules):]
####################################################################
# Appendix rules
last_appendix = len(password) - 1
appendix_rules = list()
for hashcat_rule in hashcat_rules[::-1]:
if hashcat_rule[0] == "i" and self.hashcat_to_int(hashcat_rule[1]) == last_appendix:
appendix_rules.append("$%s" % hashcat_rule[2])
last_appendix -= 1
elif len(appendix_rules):
hashcat_rules = hashcat_rules[:-len(appendix_rules)]+appendix_rules[::-1]
break
else:
break
else:
hashcat_rules = hashcat_rules[:-len(appendix_rules)]+appendix_rules[::-1]
####################################################################
# Truncate left rules
last_precut = 0
precut_rules = list()
for hashcat_rule in hashcat_rules:
if hashcat_rule[0] == "D" and self.hashcat_to_int(hashcat_rule[1]) == last_precut:
precut_rules.append("[")
elif len(precut_rules):
hashcat_rules = precut_rules[::-1]+hashcat_rules[len(precut_rules):]
break
else:
break
else:
hashcat_rules = precut_rules[::-1]+hashcat_rules[len(precut_rules):]
####################################################################
# Truncate right rules
last_postcut = len(password)
postcut_rules = list()
for hashcat_rule in hashcat_rules[::-1]:
if hashcat_rule[0] == "D" and self.hashcat_to_int(hashcat_rule[1]) >= last_postcut:
postcut_rules.append("]")
elif len(postcut_rules):
hashcat_rules = hashcat_rules[:-len(postcut_rules)]+postcut_rules[::-1]
break
else:
break
else:
hashcat_rules = hashcat_rules[:-len(postcut_rules)]+postcut_rules[::-1]
# Check if rules result in the correct password
if word_rules == password:
return hashcat_rules
else:
if self.debug: print "[!] Advanced Processing FAILED: %s => %s => %s (%s)" % (word," ".join(hashcat_rules),password,word_rules)
return None
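The four post-processing passes above rewrite runs of positional edits at the word's edges into hashcat's position-independent forms: leading inserts become `^X` prepends, trailing inserts become `$X` appends, and edge deletes become `[` / `]` truncations. A hedged sketch of just the appendix pass (Python 3, single-digit offsets only; the real code handles offsets up to 35 via `hashcat_to_int`):

```python
def compact_appends(rules, password_len):
    # Rewrite a trailing run of inserts at the final offsets into $X
    # appends, e.g. ['c', 'i61', 'i72'] for an 8-char result
    # becomes ['c', '$1', '$2'].
    last = password_len - 1
    tail = []
    for r in reversed(rules):
        if r[0] == 'i' and int(r[1]) == last:   # single-digit offsets only
            tail.append('$' + r[2])
            last -= 1
        else:
            break
    if not tail:
        return rules
    return rules[:len(rules) - len(tail)] + tail[::-1]
```

The benefit is that the compacted rule no longer depends on the source word's length, so it generalizes to other words when reused.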
############################################################################
def print_hashcat_rules(self,hashcat_rules_collection):
for word,rules,password in hashcat_rules_collection:
hashcat_rules_str = " ".join(rules or [':'])
if self.verbose: print "[+] %s => %s => %s" % (word, hashcat_rules_str, password)
if not hashcat_rules_str in self.rule_stats: self.rule_stats[hashcat_rules_str] = 1
else: self.rule_stats[hashcat_rules_str] += 1
self.output_rules_f.write("%s\n" % hashcat_rules_str)
self.output_words_f.write("%s\n" % word)
############################################################################
def verify_hashcat_rules(self,hashcat_rules_collection):
for word,rules,password in hashcat_rules_collection:
f = open("%s/test.rule" % HASHCAT_PATH,'w')
f.write(" ".join(rules))
f.close()
f = open("%s/test.word" % HASHCAT_PATH,'w')
f.write(word)
f.close()
p = subprocess.Popen(["%s/hashcat-cli64.bin" % HASHCAT_PATH,"-r","%s/test.rule" % HASHCAT_PATH,"--stdout","%s/test.word" % HASHCAT_PATH], stdout=subprocess.PIPE)
out, err = p.communicate()
out = out.strip()
if out == password:
hashcat_rules_str = " ".join(rules or [':'])
if self.verbose: print "[+] %s => %s => %s" % (word, hashcat_rules_str, password)
if not hashcat_rules_str in self.rule_stats: self.rule_stats[hashcat_rules_str] = 1
else: self.rule_stats[hashcat_rules_str] += 1
self.output_rules_f.write("%s\n" % hashcat_rules_str)
self.output_words_f.write("%s\n" % word)
else:
print "[!] Hashcat Verification FAILED: %s => %s => %s (%s)" % (word," ".join(rules or [':']),password,out)
############################################################################
# Analyze a single password
def analyze_password(self,password):
if self.verbose: print "[*] Analyzing password: %s" % password
if self.verbose: start_time = time.clock()
# Skip all numeric passwords
if password.isdigit():
if self.verbose: print "[!] %s => {skipping numeric} => %s" % (password,password)
self.numeric_stats_total += 1
# Skip passwords with less than 25% alpha characters
# TODO: Make random word detection more reliable based on word entropy.
elif len([c for c in password if c.isalpha()]) < len(password)/4:
if self.verbose: print "[!] %s => {skipping alpha less than 25%%} => %s" % (password,password)
self.special_stats_total += 1
# Only check English ASCII passwords for now; more languages may be added in a future version
elif [c for c in password if ord(c) < 32 or ord(c) > 126]:
if self.verbose: print "[!] %s => {skipping non ascii english} => %s" % (password,password)
self.foreign_stats_total += 1
# Analyze the password
else:
if not password in self.password_stats: self.password_stats[password] = 1
else: self.password_stats[password] += 1
# Short-cut words already in the dictionary
if self.enchant.check(password):
# Record password as a source word for stats
if not password in self.word_stats: self.word_stats[password] = 1
else: self.word_stats[password] += 1
hashcat_rules_collection = [(password,[],password)]
# Generate rules for words not in the dictionary
else:
# Generate source words list
words_collection = self.generate_words_collection(password)
# Generate levenshtein rules collection for each source word
lev_rules_collection = []
for word,matrix,password in words_collection:
# Generate multiple paths to get from word to password
lev_rules = self.levenshtein_reverse_path(matrix,word,password)
for lev_rule in lev_rules:
lev_rules_collection.append((word,lev_rule,password))
# Generate hashcat rules collection
hashcat_rules_collection = self.generate_hashcat_rules_collection(lev_rules_collection)
# Print or verify the completed rules for each source word of the original password
if self.hashcat:
self.verify_hashcat_rules(hashcat_rules_collection)
else:
self.print_hashcat_rules(hashcat_rules_collection)
if self.verbose: print "[*] Finished analysis in %.2f seconds" % (time.clock()-start_time)
############################################################################
# Analyze passwords file
def analyze_passwords_file(self,passwords_file):
print "[*] Analyzing passwords file: %s" % passwords_file
f = open(passwords_file,'r')
password_count = 0
analysis_start = time.clock()
try:
for password in f:
password = password.strip()
if len(password) > 0:
if password_count != 0 and password_count % 1000 == 0:
current_analysis_time = time.clock() - analysis_start
if not self.quiet: print "[*] Processed %d passwords in %.2f seconds at the rate of %.2f p/sec" % (password_count, current_analysis_time, float(password_count)/current_analysis_time )
password_count += 1
self.analyze_password(password)
except (KeyboardInterrupt, SystemExit):
print "\n[*] Rulegen was interrupted."
analysis_time = time.clock() - analysis_start
print "[*] Finished processing %d passwords in %.2f seconds at the rate of %.2f p/sec" % (password_count, analysis_time, float(password_count)/analysis_time )
password_stats_total = sum(self.password_stats.values())
print "[*] Analyzed %d passwords (%0.2f%%)" % (password_stats_total,float(password_stats_total)*100.0/float(password_count))
print "[-] Skipped %d all numeric passwords (%0.2f%%)" % (self.numeric_stats_total, float(self.numeric_stats_total)*100.0/float(password_stats_total))
print "[-] Skipped %d passwords with less than 25%% alpha characters (%0.2f%%)" % (self.special_stats_total, float(self.special_stats_total)*100.0/float(password_stats_total))
print "[-] Skipped %d passwords with non ascii characters (%0.2f%%)" % (self.foreign_stats_total, float(self.foreign_stats_total)*100.0/float(password_stats_total))
print "\n[*] Top 10 word statistics"
top100_f = open("%s-top100.word" % self.basename, 'w')
word_stats_total = sum(self.word_stats.values())
for i,(word,count) in enumerate(sorted(self.word_stats.iteritems(), key=operator.itemgetter(1), reverse=True)[:100]):
if i < 10: print "[+] %s - %d (%0.2f%%)" % (word, count, float(count)*100/float(word_stats_total))
top100_f.write("%s\n" % word)
top100_f.close()
print "[*] Saving Top 100 words in %s-top100.word" % self.basename
print "\n[*] Top 10 rule statistics"
top100_f = open("%s-top100.rule" % self.basename, 'w')
rule_stats_total = sum(self.rule_stats.values())
for i,(rule,count) in enumerate(sorted(self.rule_stats.iteritems(), key=operator.itemgetter(1), reverse=True)[:100]):
if i < 10: print "[+] %s - %d (%0.2f%%)" % (rule, count, float(count)*100/float(rule_stats_total))
top100_f.write("%s\n" % rule)
top100_f.close()
print "[*] Saving Top 100 rules in %s-top100.rule" % self.basename
print "\n[*] Top 10 password statistics"
top100_f = open("%s-top100.password" % self.basename, 'w')
password_stats_total = sum(self.password_stats.values())
for i,(password,count) in enumerate(sorted(self.password_stats.iteritems(), key=operator.itemgetter(1), reverse=True)[:100]):
if i < 10: print "[+] %s - %d (%0.2f%%)" % (password, count, float(count)*100/float(password_stats_total))
top100_f.write("%s\n" % password)
top100_f.close()
print "[*] Saving Top 100 passwords in %s-top100.password" % self.basename
f.close()
if __name__ == "__main__":
header = " _ \n"
header += " RuleGen 0.0.1 | |\n"
header += " _ __ __ _ ___| | _\n"
header += " | '_ \ / _` |/ __| |/ /\n"
header += " | |_) | (_| | (__| < \n"
header += " | .__/ \__,_|\___|_|\_\\\n"
header += " | | \n"
header += " |_| iphelix@thesprawl.org\n"
header += "\n"
parser = OptionParser("%prog [options] passwords.txt", version="%prog "+VERSION)
parser.add_option("-b","--basename", help="Output base name. The following files will be generated: basename.words, basename.rules and basename.stats", default="analysis",metavar="rockyou")
parser.add_option("-w","--wordlist", help="Use a custom wordlist for rule analysis.", metavar="wiki.dict")
parser.add_option("-q", "--quiet", action="store_true", dest="quiet", default=False, help="Don't show headers.")
wordtune = OptionGroup(parser, "Fine tune source word generation:")
wordtune.add_option("--maxworddist", help="Maximum word edit distance (Levenshtein)", type="int", default=10, metavar="10")
wordtune.add_option("--maxwords", help="Maximum number of source word candidates to consider", type="int", default=5, metavar="5")
wordtune.add_option("--morewords", help="Consider suboptimal source word candidates", action="store_true", default=False)
wordtune.add_option("--simplewords", help="Generate simple source words for given passwords", action="store_true", default=False)
parser.add_option_group(wordtune)
ruletune = OptionGroup(parser, "Fine tune rule generation:")
ruletune.add_option("--maxrulelen", help="Maximum number of operations in a single rule", type="int", default=10, metavar="10")
ruletune.add_option("--maxrules", help="Maximum number of rules to consider", type="int", default=5, metavar="5")
ruletune.add_option("--morerules", help="Generate suboptimal rules", action="store_true", default=False)
ruletune.add_option("--simplerules", help="Generate simple rules (insert,delete,replace)", action="store_true", default=False)
parser.add_option_group(ruletune)
spelltune = OptionGroup(parser, "Fine tune spell checker engine:")
spelltune.add_option("--providers", help="Comma-separated list of provider engines", default="aspell,myspell", metavar="aspell,myspell")
parser.add_option_group(spelltune)
debug = OptionGroup(parser, "Debugging options:")
debug.add_option("-v","--verbose", help="Show verbose information.", action="store_true", default=False)
debug.add_option("-d","--debug", help="Debug rules.", action="store_true", default=False)
debug.add_option("--password", help="Process the last argument as a password, not a file.", action="store_true", default=False)
debug.add_option("--word", help="Use a custom word for rule analysis", metavar="Password")
debug.add_option("--hashcat", help="Test generated rules with hashcat-cli", action="store_true", default=False)
parser.add_option_group(debug)
(options, args) = parser.parse_args()
# Print program header
if not options.quiet:
print header
if len(args) < 1:
parser.error("no passwords file specified")
exit(1)
rulegen = RuleGen(language="en", providers=options.providers, basename=options.basename)
# Finetuning word generation
rulegen.max_word_dist=options.maxworddist
rulegen.max_words=options.maxwords
rulegen.more_words=options.morewords
rulegen.simple_words=options.simplewords
# Finetuning rule generation
rulegen.max_rule_len=options.maxrulelen
rulegen.max_rules=options.maxrules
rulegen.more_rules=options.morerules
rulegen.simple_rules=options.simplerules
# Debugging options
rulegen.word = options.word
rulegen.verbose=options.verbose
rulegen.debug = options.debug
rulegen.hashcat = options.hashcat
rulegen.quiet = options.quiet
# Custom wordlist
if not options.word:
if options.wordlist: rulegen.load_custom_wordlist(options.wordlist)
print "[*] Using Enchant '%s' module. For best results please install" % rulegen.enchant.provider.name
print " '%s' module language dictionaries." % rulegen.enchant.provider.name
if not options.quiet:
print "[*] Saving rules to %s.rule" % options.basename
print "[*] Saving words to %s.word" % options.basename
print "[*] Press Ctrl-C to end execution and generate statistical analysis."
# Analyze a single password or several passwords in a file
if options.password: rulegen.analyze_password(args[0])
else: rulegen.analyze_passwords_file(args[0])
198
statsgen.py Executable file
View File
@ -0,0 +1,198 @@
#!/usr/bin/env python
# StatsGen - Password Statistical Analysis tool
#
# This tool is part of PACK (Password Analysis and Cracking Kit)
#
# VERSION 0.0.2
#
# Copyright (C) 2013 Peter Kacherginsky
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
import re, operator, string
from optparse import OptionParser
VERSION = "0.0.2"
try:
import psyco
psyco.full()
print "[*] Using Psyco to accelerate parsing."
except ImportError:
print "[?] Psyco is not available. Install Psyco on 32-bit systems for faster parsing."
password_counter = 0
# Constants
chars_regex = list()
chars_regex.append(('numeric',re.compile('^[0-9]+$')))
chars_regex.append(('loweralpha',re.compile('^[a-z]+$')))
chars_regex.append(('upperalpha',re.compile('^[A-Z]+$')))
chars_regex.append(('mixedalpha',re.compile('^[a-zA-Z]+$')))
chars_regex.append(('loweralphanum',re.compile('^[a-z0-9]+$')))
chars_regex.append(('upperalphanum',re.compile('^[A-Z0-9]+$')))
chars_regex.append(('mixedalphanum',re.compile('^[a-zA-Z0-9]+$')))
chars_regex.append(('special',re.compile('^[^a-zA-Z0-9]+$')))
chars_regex.append(('loweralphaspecial',re.compile('^[^A-Z0-9]+$')))
chars_regex.append(('upperalphaspecial',re.compile('^[^a-z0-9]+$')))
chars_regex.append(('mixedalphaspecial',re.compile('^[^0-9]+$')))
chars_regex.append(('loweralphaspecialnum',re.compile('^[^A-Z]+$')))
chars_regex.append(('upperalphaspecialnum',re.compile('^[^a-z]+$')))
chars_regex.append(('mixedalphaspecialnum',re.compile('.*')))
masks_regex = list()
masks_regex.append(('alldigit',re.compile('^\d+$', re.IGNORECASE)))
masks_regex.append(('allstring',re.compile('^[a-z]+$', re.IGNORECASE)))
masks_regex.append(('stringdigit',re.compile('^[a-z]+\d+$', re.IGNORECASE)))
masks_regex.append(('digitstring',re.compile('^\d+[a-z]+$', re.IGNORECASE)))
masks_regex.append(('digitstringdigit',re.compile('^\d+[a-z]+\d+$', re.IGNORECASE)))
masks_regex.append(('stringdigitstring',re.compile('^[a-z]+\d+[a-z]+$', re.IGNORECASE)))
masks_regex.append(('allspecial',re.compile('^[^a-z0-9]+$', re.IGNORECASE)))
masks_regex.append(('stringspecial',re.compile('^[a-z]+[^a-z0-9]+$', re.IGNORECASE)))
masks_regex.append(('specialstring',re.compile('^[^a-z0-9]+[a-z]+$', re.IGNORECASE)))
masks_regex.append(('stringspecialstring',re.compile('^[a-z]+[^a-z0-9]+[a-z]+$', re.IGNORECASE)))
masks_regex.append(('stringspecialdigit',re.compile('^[a-z]+[^a-z0-9]+\d+$', re.IGNORECASE)))
masks_regex.append(('specialstringspecial',re.compile('^[^a-z0-9]+[a-z]+[^a-z0-9]+$', re.IGNORECASE)))
def length_check(password):
return len(password)
def masks_check(password):
for (name,regex) in masks_regex:
if regex.match(password):
return name
else:
return "othermask"
def chars_check(password):
for (name,regex) in chars_regex:
if regex.match(password):
return name
else:
return "otherchar"
def advmask_check(password):
advmask = list()
for letter in password:
if letter in string.digits: advmask.append("?d")
elif letter in string.lowercase: advmask.append("?l")
elif letter in string.uppercase: advmask.append("?u")
else: advmask.append("?s")
return "".join(advmask)
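The same per-character mask classification as a Python 3 sketch (`string.lowercase`/`string.uppercase` are Python 2 names; `str` methods work in both):

```python
def pass_to_mask(password):
    # Build a hashcat brute-force mask: ?d digit, ?l lowercase,
    # ?u uppercase, ?s everything else.
    out = []
    for c in password:
        if c.isdigit():
            out.append("?d")
        elif c.islower():
            out.append("?l")
        elif c.isupper():
            out.append("?u")
        else:
            out.append("?s")
    return "".join(out)

# pass_to_mask("Passw0rd!") -> "?u?l?l?l?l?d?l?l?s"
```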
def main():
password_length = dict()
masks = dict()
advmasks = dict()
chars = dict()
filter_counter = 0
total_counter = 0
header = " _ \n"
header += " StatsGen 0.0.2 | |\n"
header += " _ __ __ _ ___| | _\n"
header += " | '_ \ / _` |/ __| |/ /\n"
header += " | |_) | (_| | (__| < \n"
header += " | .__/ \__,_|\___|_|\_\\\n"
header += " | | \n"
header += " |_| iphelix@thesprawl.org\n"
header += "\n"
parser = OptionParser("%prog [options] passwords.txt", version="%prog "+VERSION)
parser.add_option("-l", "--length", dest="length_filter",help="Password length filter.",metavar="8")
parser.add_option("-c", "--charset", dest="char_filter", help="Password charset filter.", metavar="loweralpha")
    parser.add_option("-m", "--mask", dest="mask_filter", help="Password mask filter.", metavar="stringdigit")
    parser.add_option("-o", "--maskoutput", dest="mask_output", help="Save masks to a file.", metavar="masks.csv")
parser.add_option("-q", "--quiet", action="store_true", dest="quiet", default=False, help="Don't show headers.")
(options, args) = parser.parse_args()
# Print program header
if not options.quiet:
print header
    if len(args) != 1:
        parser.error("no passwords file specified")
print "[*] Analyzing passwords: %s" % args[0]
f = open(args[0],'r')
for password in f:
password = password.strip()
total_counter += 1
pass_len = length_check(password)
mask_set = masks_check(password)
char_set = chars_check(password)
advmask = advmask_check(password)
if (not options.length_filter or str(pass_len) in options.length_filter.split(',')) and \
(not options.char_filter or char_set in options.char_filter.split(',')) and \
(not options.mask_filter or mask_set in options.mask_filter.split(',')):
filter_counter += 1
            try: password_length[pass_len] += 1
            except KeyError: password_length[pass_len] = 1
            try: masks[mask_set] += 1
            except KeyError: masks[mask_set] = 1
            try: chars[char_set] += 1
            except KeyError: chars[char_set] = 1
            try: advmasks[advmask] += 1
            except KeyError: advmasks[advmask] = 1
    f.close()
    if filter_counter == 0:
        print "[-] No passwords matched the given filters."
        exit(1)
    print "[+] Analyzing %d%% (%d/%d) passwords" % (filter_counter*100/total_counter, filter_counter, total_counter)
    print "    NOTE: Statistics below are relative to the number of analyzed passwords, not the total number of passwords."
    print "\n[*] Length statistics..."
for (length,count) in sorted(password_length.iteritems(), key=operator.itemgetter(1), reverse=True):
if count*100/filter_counter > 0:
print "[+] %25d: %02d%% (%d)" % (length, count*100/filter_counter, count)
print "\n[*] Mask statistics..."
for (mask,count) in sorted(masks.iteritems(), key=operator.itemgetter(1), reverse=True):
print "[+] %25s: %02d%% (%d)" % (mask, count*100/filter_counter, count)
print "\n[*] Charset statistics..."
for (char,count) in sorted(chars.iteritems(), key=operator.itemgetter(1), reverse=True):
print "[+] %25s: %02d%% (%d)" % (char, count*100/filter_counter, count)
print "\n[*] Advanced Mask statistics..."
for (advmask,count) in sorted(advmasks.iteritems(), key=operator.itemgetter(1), reverse=True):
if count*100/filter_counter > 0:
print "[+] %25s: %02d%% (%d)" % (advmask, count*100/filter_counter, count)
if options.mask_output:
print "\n[*] Saving Mask statistics to %s" % options.mask_output
fmask = open(options.mask_output, "w")
for (advmask,count) in sorted(advmasks.iteritems(), key=operator.itemgetter(1), reverse=True):
fmask.write("%s,%d\n" % (advmask,count))
fmask.close()
if __name__ == "__main__":
main()
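
# Example invocations (the script name "statsgen.py" is assumed here; the
# filter options accept comma-separated value lists, per the filter checks
# in main()):
#
#   python statsgen.py passwords.txt
#   python statsgen.py -l 8,9 -c loweralpha -m stringdigit passwords.txt
#   python statsgen.py -o masks.csv passwords.txt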