Web3.0 - the semantic web - is just around the corner (I think we're at Web2.8.0 at the moment), which means the days of human relevance are coming to an end. We had a good run, and we contributed a lot of semi-interesting things, but now it's the machines' turn.
One of the initial problems they are going to face is how to ensure their services are being consumed by their intended audience - and not bandwidth-wasting humans. That's why I've created the humans.txt captcha.
We humans may be able to read almost illegible characters and assemble English-like words from them, but we can't perform hundreds of repetitive, mind-numbing calculations a second. Sure, we can try - but like neural network image recognition, sometimes our best just isn't good enough.
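Here's a minimal sketch of how such a reverse captcha might work (the names and numbers are all hypothetical, not a real spec): the server hands out a batch of trivial arithmetic challenges, and the client only passes if it answers every one correctly within a deadline no human could meet.

```python
import random
import time

def generate_challenges(n=500):
    """Generate n trivial arithmetic challenges: pairs (a, b) to be summed."""
    return [(random.randint(1, 999), random.randint(1, 999)) for _ in range(n)]

def verify_client(solve, challenges, time_limit=1.0):
    """Pass the client only if it answers *all* challenges correctly
    within time_limit seconds -- trivial for a machine, hopeless for a human."""
    start = time.monotonic()
    answers = [solve(a, b) for a, b in challenges]
    elapsed = time.monotonic() - start
    all_correct = all(ans == a + b for ans, (a, b) in zip(answers, challenges))
    return all_correct and elapsed <= time_limit

# A machine client clears the bar without breaking a sweat:
machine_passes = verify_client(lambda a, b: a + b, generate_challenges())
```

A human with a keyboard gets maybe one answer per second; the deadline does the rest.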