I have hundreds of millions of rows in a text/CSV file (genomics database btw – each record is less than 255 characters long…).
Ideally I would like to make them searchable, since right now my best guess is splitting them (with a little help from Cygwin!) and reading them one by one as ~500 MB text files in Notepad++ (yes… I know…) – a very inconvenient and caveman-like approach.
I would like to use MySQL, but maybe other databases too. I have a budget of up to $500 for Amazon instances when needed – maybe 32 GB RAM, some Xeon Gold, and a 200 GB hard disk on Amazon can do it? No problem using up to 10 instances, each doing concurrent inserts/loading.
I read someone had achieved 300,000 rows/second using LOAD DATA INFILE on a local server with an SSD and 32 GB RAM – if I can get to even 50,000 rows/second and then be able to query it in reasonable time with, say, phpMyAdmin, I'd be happy. Thanks!
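In case it helps, here is a minimal sketch of the kind of LOAD DATA INFILE setup I have in mind – the table name, column layout, file path, and CSV delimiters are all just assumptions about my data, not the real schema:

```sql
-- Hypothetical table; columns and types are assumptions about the CSV layout.
CREATE TABLE variants (
    chrom VARCHAR(32)  NOT NULL,   -- chromosome
    pos   INT UNSIGNED NOT NULL,   -- position
    ref   VARCHAR(64)  NOT NULL,   -- reference allele
    alt   VARCHAR(64)  NOT NULL,   -- alternate allele
    info  VARCHAR(255),            -- everything else (each record is < 255 chars)
    INDEX idx_chrom_pos (chrom, pos)
) ENGINE=InnoDB;

-- Bulk-load one CSV chunk; file path and delimiters are assumptions.
LOAD DATA LOCAL INFILE '/data/chunk_001.csv'
INTO TABLE variants
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES   -- skip the header row, if there is one
(chrom, pos, ref, alt, info);
```

From what I've read, loading into a table without secondary indexes and adding them afterwards is supposed to be faster, so I'd probably try that too.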