bab.la only crawls content from websites that are already indexed by search engines. If you want to prevent this,
please configure your site's robots.txt file accordingly.
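As a minimal sketch, a robots.txt rule like the following would ask the crawler to skip your site. The exact user-agent token is an assumption based on the bot name given below; check your server logs for the precise string your site sees.

```text
# Hypothetical rule blocking the bab.la crawler site-wide.
# The user-agent string "bab.la bot" is assumed from the name
# mentioned in this notice; verify it in your access logs.
User-agent: bab.la bot
Disallow: /
```

Place this file at the root of your domain (e.g. `/robots.txt`); well-behaved crawlers fetch and honor it before requesting other pages.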
We crawl textual content from bilingual and multilingual websites using software that identifies sentence structures and
segments them into short phrases. The sole purpose of our crawling is to illustrate our translations with example sentences in context.
You can identify our crawler by the name "bab.la bot". It does not damage or slow down your website.
If you would like your site to be removed from bab.la, please contact us at bot [at] bab.la.