Quite often I need to analyze a block of text to find the most frequently occurring words. The sed command turned out to be the perfect workhorse to do all the grunt work for me. Effectively, the ultimate command is a series of chained pipes feeding output from one task to the next. These are the references that got me there:
https://williamjturkel.net/2013/06/15/basic-text-analysis-with-command-line-tools-in-linux/
https://stackoverflow.com/questions/10552803/how-to-create-a-frequency-list-of-every-word-in-a-file
https://superuser.com/questions/661661/listing-all-words-in-a-text-file-and-finding-the-most-frequent-word
https://stackoverflow.com/questions/33055663/removing-stopwords-from-text-corpus-using-linux-commandline
This is the magic recipe:
sed -e 's/[^[:alpha:]]/ /g' test.txt | tr '\n' " " | tr -s " " | tr " " '\n' | sed '/^.$/d' | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | nl | head -n 5
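Here is the same pipe broken out stage by stage (a bash sketch; test.txt stands in for whatever file you are analyzing). One caveat: sed '/^.$/d' drops one-letter words such as "a" and "I", but an empty line produced by a leading space would slip through and be counted as a word.

sed -e 's/[^[:alpha:]]/ /g' test.txt |   # replace every non-letter with a space
tr '\n' ' ' |                            # join all lines into one long line
tr -s ' ' |                              # squeeze runs of spaces down to one
tr ' ' '\n' |                            # put each word on its own line
sed '/^.$/d' |                           # drop one-letter words
tr 'A-Z' 'a-z' |                         # lowercase so "Word" and "word" match
sort | uniq -c |                         # count repeated words
sort -nr |                               # highest counts first
nl |                                     # prepend a rank number
head -n 5                                # keep the top five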
The same chain-of-small-steps mindset helps when preparing data for import elsewhere. Here I export a MySQL table to CSV so it can eventually be loaded into Neo4j:

SELECT * FROM soil_survey INTO OUTFILE '/var/lib/mysql-files/soil_survey.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
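One gotcha: MySQL will only write OUTFILE results under the directory named by its secure_file_priv variable, and on many Linux packages that defaults to /var/lib/mysql-files/, which is why the export lands there and has to be moved afterwards. You can check your server's setting first (the root user here is just a placeholder):

mysql -u root -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"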
sudo mv /var/lib/mysql-files/soil_survey.csv data-import-directory/
Since INTO OUTFILE does not write column names, prepend a header row:

sed -i '1i Hort_Client,Contractor,Region,Locality,Soil_Service,Solution,Soil_Issue,Date_Reported,Date_Actioned,DaysToAction' data-import-directory/soil_survey.csv

Should a tool expect headerless data instead, deleting the first line reverses this:

sed -i '1d' data-import-directory/soil_survey.csv

Check the result:

head -3 data-import-directory/soil_survey.csv
Hort_Client,Contractor,Region,Locality,Soil_Service,Solution,Soil_Issue,Date_Reported,Date_Actioned,DaysToAction
159,1091,Northbury,3656,54593,5397,Erosion,2007-05-07,2008-02-18,287
159,1091,Northbury,1516,22644,5397,Erosion,2007-05-07,2008-03-18,316
You now have a workable CSV data file that you can import into a Neo4j graph.
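To close the loop, here is a minimal import sketch. Everything in it beyond the CSV itself is an assumption: the Neo4j import directory path, the Survey node label, the chosen properties, and the neo4j/password credentials are illustrative, not part of the recipe above.

# Neo4j reads LOAD CSV files from its own import directory (path varies by install).
sudo cp data-import-directory/soil_survey.csv /var/lib/neo4j/import/

# Pipe the Cypher into cypher-shell on stdin.
cypher-shell -u neo4j -p 'your-password' <<'EOF'
LOAD CSV WITH HEADERS FROM 'file:///soil_survey.csv' AS row
CREATE (:Survey {
  hort_client:    row.Hort_Client,
  soil_issue:     row.Soil_Issue,
  date_reported:  date(row.Date_Reported),
  days_to_action: toInteger(row.DaysToAction)
});
EOF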