how their Applebot works. They've got a web crawler that for the last three years or so has been able to render JavaScript, something that took Google around 20 years to achieve. So they've got a mature crawler. On that page, they also outline their top five search ranking factors. It's things like links and web page design characteristics, which sound like PageRank and Core Web Vitals.

So the search ranking factors look very similar to Google's. They've got a crawler very similar to Google's. On their page, they explain that if you've got no Applebot-specific rules in your robots.txt file, they'll follow your Googlebot rules instead. So you can see the direction of travel: they're trying to crawl the web in a similar shape to Google. But I think that's a distraction. There are several things they could do differently if they wanted to build a bigger search engine, and we're going to talk about three of them — three choices that would let them build a search engine in a very novel way and compete with Google in a way that we haven't necessarily fully understood.
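To make that robots.txt fallback concrete, here's a minimal sketch (the paths are purely illustrative, not from any real site). If the Applebot group is absent, Applebot applies the Googlebot group instead; adding an Applebot group overrides that fallback:

```
# With no "User-agent: Applebot" group present, Applebot
# would follow the Googlebot rules below.
User-agent: Googlebot
Disallow: /drafts/
Disallow: /private/

# Adding this group makes Applebot use these rules instead,
# ignoring the Googlebot group entirely.
User-agent: Applebot
Disallow: /private/
```

In this sketch, Applebot would be allowed into /drafts/ (its own group doesn't block it) while Googlebot would not — which is exactly why site owners who only ever wrote Googlebot rules get sensible behavior from Applebot by default.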