All the Stack Overflow answers lead to: it's complex, read a 2,000-page book.
This is an exceptionally good answer, and you're doing everything possible to avoid following it, when you could probably have been halfway through the book by now. Database administration is a profession, not a job. It requires specialized training to do well, and doing everything possible to avoid that training and knowledge won't help you one bit.
My queries are not that complex.
It doesn't matter. Your database is very complex.
They simply go through the whole table to identify any duplicates.
You search 10 million records on every request and you wonder why it's slow?
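The scan-for-duplicates pattern above is exactly what an index exists to avoid. A minimal sketch (using Python's stdlib sqlite3, with made-up table and column names) of how a UNIQUE index lets the database reject duplicates with an index lookup instead of reading every row:

```python
import sqlite3

# Hypothetical illustration: a UNIQUE index turns duplicate detection
# into an index lookup instead of a full-table scan over 10M rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE UNIQUE INDEX idx_payload ON records (payload)")

conn.execute("INSERT INTO records (payload) VALUES ('alpha')")

# INSERT OR IGNORE lets the index reject the duplicate; no application-side
# scan over existing rows is needed.
conn.execute("INSERT OR IGNORE INTO records (payload) VALUES ('alpha')")
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(count)  # 1 — the duplicate never made it in
```

The same idea applies in any engine: declare the uniqueness constraint and let the database enforce it, rather than searching the table yourself on every request.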
Is there a wizard move to bypass any of my restrictions, or is a change in the setup and algorithm inevitable?
No. Database administration is very difficult. Reading that 2,000-page book is essential for setting up infrastructure that avoids a monolithic setup like this in the first place.
The other culprit is that our server runs on an HDD, which, at roughly 150 MB/s read and write, is probably at its limit.
lol wtf
Realistically, this setup is 10 years out of date. How large is your database? Is there any reason it can't run in memory? 10 million rows isn't insurmountable. Full text with a moderate number of tables could be ~10 GB; there's no reason that can't be run in memory with Redis, or with a more modern in-memory database like Dice.
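The ~10 GB figure above is a back-of-envelope estimate, not a measurement; the per-row size is an assumption. The arithmetic behind it:

```python
# Rough sizing check: does 10 million full-text rows fit in RAM?
rows = 10_000_000
bytes_per_row = 1_000          # assumed average row size, including overhead
total_gb = rows * bytes_per_row / 1e9
print(total_gb)  # 10.0 GB — plausibly fits in memory on commodity hardware
```

If your average row is larger or smaller, scale accordingly; the point is that a dataset this size is well within what a single machine can hold in memory today.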
Your biggest problem is the lack of deduplication and normalization in your database design. If it's not fixed now, it'll simply get worse year over year until it's unusable. Either spend the time and money now, or spend even more time and money later to fix it. 🤷‍♂️
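To make the normalization point concrete, here is a minimal sketch (stdlib sqlite3; all table and column names are hypothetical) of the basic move: repeated values in a flat table get their own table and are referenced by id, so each value is stored once.

```python
import sqlite3

# Hypothetical flat table with duplicated author strings, then the
# normalized version: authors stored once, referenced by id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE flat (title TEXT, author TEXT);
INSERT INTO flat VALUES
  ('Post 1', 'alice'), ('Post 2', 'alice'), ('Post 3', 'bob');

CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE posts (title TEXT, author_id INTEGER REFERENCES authors(id));

INSERT INTO authors (name) SELECT DISTINCT author FROM flat;
INSERT INTO posts (title, author_id)
  SELECT f.title, a.id FROM flat f JOIN authors a ON a.name = f.author;
""")
n_authors = conn.execute("SELECT COUNT(*) FROM authors").fetchone()[0]
n_posts = conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
print(n_authors, n_posts)  # 2 authors, 3 posts: 'alice' stored once
```

Doing this across a real schema is the part that takes time and money, but the mechanical step per column is exactly this.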
tl;dr: RTFM.