For most log and text files, simply opening them up in $editor_of_your_choice
works fine. If the file is less than a few hundred megabytes, most modern editors (and even some IDEs) shouldn't choke on them too badly.
But what if you have a 1, 2, or 10 GB logfile or giant text file you need to search through? Finding a line with a bit of text is simple enough (and not too slow) if you're using grep. But if you want to grab a chunk of the file and edit that chunk, or split the file into smaller files, there's a simple process I use, based on this Stack Overflow answer:
- Run the following once to find the line number where your chunk should start, then again (searching for a string near the end of the chunk) to find the last line you're interested in:
grep -n 'string-to-search' giant.log | head -n 1
- Taking the results of the first search (X) and the second (Y), create a smaller chunk of the giant file in the same directory (a worked example follows this list):
sed -n -e 'X,Yp' -e 'Yq' giant.log > small.log
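
As a worked example (the search strings and line numbers below are made up), suppose the two grep runs return lines 1234 and 5678:

grep -n 'first interesting event' giant.log | head -n 1   # prints: 1234:first interesting event...
grep -n 'last interesting event' giant.log | head -n 1    # prints: 5678:last interesting event...
sed -n -e '1234,5678p' -e '5678q' giant.log > small.log

The second sed expression ('5678q') tells sed to quit as soon as it hits the end line, so it doesn't waste time scanning the rest of the multi-gigabyte file.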
I usually don't need to put the smaller chunks back together, but if you do, you can recombine them using the suggestion in the original Stack Overflow answer.
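
For what it's worth, concatenating the chunks back in order with plain cat works fine; the file names here are just placeholders:

cat small-1.log small-2.log small-3.log > giant-recombined.log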