Website Health Report powered by https://sitedoctor.peoplentools.com

Health report for: https://www.aydoo.fr/

Examined at: May 13, 2024 12:41:24

Follow the recommendations in this health report to keep your site healthy.
25.4 / 100
Overall Score
80.52 / 100
Desktop Score
59.11 / 100
Mobile Score


Page Title

Comparateur assurances pas cher en ligne

Short Recommendation

Your page title does not exceed 60 characters. It's fine.

The title is the heading of a webpage: the string enclosed in the HTML <title> tag. Search engines read the title and display it, along with your website address, in search results. The title is the most important element for both SEO and social sharing. Keep it under 50 to 60 characters, because search engines typically display only about that many characters in results. A good title can combine the primary keyword, secondary keyword, and brand name; a fictitious gaming-news site, for example, might use the title "The future of gaming information is here". A page title should give a proper glimpse of the page, since it identifies your website for users, search engines, and social sharing. So write a clear, catchy title.
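
For reference, the title is declared inside the page's <head>. A minimal sketch: the surrounding skeleton is illustrative, while the title text is this page's actual title as reported above.

<!DOCTYPE html>
<html lang="fr">
<head>
  <!-- Under 60 characters, so search engines can display it in full -->
  <title>Comparateur assurances pas cher en ligne</title>
</head>
</html>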

Meta Description

Aydoo est un comparateur assurances innovant, comparateur indépendant il ne se fait pas payer par les assureurs qu'il compare

Short Recommendation

Your meta description does not exceed 150 characters. It's fine.


Meta Keyword

Short Recommendation

Your site does not have any meta keywords.

Meta keywords are keywords declared inside a meta tag. They are unlikely to be used for search engine ranking anymore, but the words from your title and description can be reused as meta keywords; this can still help aspects of SEO other than ranking.
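
If you do add meta keywords, they go in a meta tag in the <head>. A minimal sketch; the keyword list below is illustrative, drawn from this page's own frequent terms, not from its actual markup:

<meta name="keywords" content="comparateur assurance, assurance en ligne, devis assurance">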

Keyword Analysis

Single Keywords

Keyword Occurrence Density Possible Spam
de 72 5.252 % No
d’assurance 45 3.282 % No
les 42 3.063 % No
vous 28 2.042 % No
des 25 1.823 % No
en 25 1.823 % No
la 24 1.751 % No
à 24 1.751 % No
votre 15 1.094 % No
le 14 1.021 % No
comparateurs 14 1.021 % No
pour 14 1.021 % No
police 12 0.875 % No
comparateur 12 0.875 % No
une 11 0.802 % No
vos 11 0.802 % No
besoins 10 0.729 % No
10 0.729 % No
assurance 9 0.656 % No
il 8 0.584 % No

Two Word Keywords

Keyword Occurrence Density Possible Spam
comparateurs d’assurance 13 0.948 % No
que vous 11 0.802 % No
la police 10 0.729 % No
comparateur d’assurance 10 0.729 % No
vos besoins 9 0.656 % No
de la 9 0.656 % No
un comparateur 8 0.584 % No
en ligne 7 0.511 % No
d’assurance et 6 0.438 % No
les comparateurs 6 0.438 % No
une assurance 5 0.365 % No
il est 5 0.365 % No
et de 5 0.365 % No
comparer les 5 0.365 % No
vous avez 5 0.365 % No
garanties et 5 0.365 % No
et les 5 0.365 % No
trouver la 4 0.292 % No
compagnies d’assurance 4 0.292 % No
sur le 4 0.292 % No

Three Word Keywords

Keyword Occurrence Density Possible Spam
un comparateur d’assurance 6 0.438 % No
les comparateurs d’assurance 5 0.365 % No
les différentes offres 4 0.292 % No
à vos besoins 4 0.292 % No
Une fois que 4 0.292 % No
fois que vous 4 0.292 % No
que vous avez 4 0.292 % No
prendre en compte 4 0.292 % No
de la police 4 0.292 % No
sur le marché 3 0.219 % No
trouver la police 3 0.219 % No
la police d’assurance 3 0.219 % No
comparer les différentes 3 0.219 % No
utiliser un comparateur 3 0.219 % No
les détails de 3 0.219 % No
d’assurance il est 3 0.219 % No
comparateurs d’assurance vous 3 0.219 % No
d’assurance vous permettent 3 0.219 % No
vos besoins en 3 0.219 % No
besoins en assurance 3 0.219 % No

Four Word Keywords

Keyword Occurrence Density Possible Spam
Une fois que vous 4 0.292 % No
fois que vous avez 4 0.292 % No
comparer les différentes offres 3 0.219 % No
comparateurs d’assurance vous permettent 3 0.219 % No
vos besoins en assurance 3 0.219 % No
une liste de devis 3 0.219 % No
liste de devis personnalisés 3 0.219 % No
les différentes offres en 3 0.219 % No
les garanties et les 3 0.219 % No
– Une fois que 3 0.219 % No
disponibles sur le marché 2 0.146 % No
sur le marché il 2 0.146 % No
il peut être difficile 2 0.146 % No
peut être difficile de 2 0.146 % No
être difficile de s’y 2 0.146 % No
difficile de s’y retrouver 2 0.146 % No
Je compare les offres 2 0.146 % No
trouver la police d’assurance 2 0.146 % No
la police d’assurance parfaite 2 0.146 % No
police d’assurance parfaite pour 2 0.146 % No

Keyword Usage

Keyword Usage

Short Recommendation

The most-used keywords do not match the meta keywords.

Keyword usage means using your keywords inside the meta tags and content of your website. Use keywords that describe your site accurately so that your website appears in the right search results.

Sitemap

Short Recommendation

Your site has a sitemap.

Location

https://www.aydoo.fr/sitemap_index.xml

A sitemap is an XML file that contains a full list of your website's URLs. It exposes your website's pages for crawling and indexing by search engines and for access by users, and it can help search engine robots index your website faster and more deeply. It is roughly the opposite of robots.txt. You can create a sitemap.xml with various free and paid services, or write one yourself in the proper format (a minimal example follows the list below).

Also keep these things in mind:
1) A sitemap must be less than 10 MB (10,485,760 bytes) and can contain a maximum of 50,000 URLs; if you have more URLs than this, create multiple sitemap files and use a sitemap index file.
2) Put your sitemap in the website root directory and add the URL of your sitemap to robots.txt.
3) sitemap.xml can be compressed using gzip for faster loading.
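
A minimal sketch of a sitemap.xml: the URL is this site's, while the lastmod, changefreq, and priority values are illustrative.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.aydoo.fr/</loc>
    <lastmod>2024-05-13</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>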

Broken link: a broken link is an inaccessible link or URL on a website. A high rate of broken links has a negative effect on search engine ranking due to reduced link equity, and it also hurts user experience. There are several possible reasons for a broken link, all listed below:
1) An incorrect link entered by you.
2) The destination website removed the linked web page (a common 404 error).
3) The destination website has permanently moved or no longer exists (a changed domain, or a blocked or defunct site).
4) The user may be behind a firewall or similar security mechanism that blocks access to the destination website.
5) You have linked to a site that is blocked by a firewall or similar software for outside access.

Total Words

Total Words

1371

Unique words are uncommon words that reflect your site's features and information. Search engines are not known to use unique words as a ranking factor, but they are still useful for getting a proper picture of your site's content. Using positive unique words like "complete", "perfect", or "shiny" is good for user experience.

Stop words are common words such as prepositions and generic words like "download", "click me", "offer", and "win". Since the most-used keywords may be only a slight factor for visitors, you are encouraged to use more unique words and fewer stop words.

Text/HTML Ratio Test

Site failed text/HTML ratio test.

Text/HTML Ratio Test : 4%

The ideal ratio of text to HTML code lies between 20% and 60%. If it falls below 20%, you need to write more text on your web page; above 60%, your page might be considered spam.
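
As a rough worked example: the HTML Page Size test below reports about 255 KB of HTML, so this page's 4% ratio implies roughly 0.04 × 255 KB ≈ 10 KB of visible text, far below the 20% lower bound. The fix is to add substantially more text content rather than only trimming markup.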

HTML Headings

  • H1(1)
  • Comparateur assurances
  • H2(6)
  • Introduction aux comparateurs d’assurance
  • Pourquoi utiliser un comparateur d’assurance ?
  • Comment fonctionnent les comparateurs d’assurance ?
  • Facteurs à prendre en compte pour comparer les polices d’assurance
  • Comparateurs d’assurance et méthodes traditionnelles d’achat d’assurance
  • Une approche pas à pas pour trouver la police parfaite
  • H3(1)
  • Advantages
  • H4(0)
  • H5(0)
  • H6(0)

H1 status is the existence of any content inside an h1 tag. Although not as important as meta titles and descriptions for search engine ranking, it is still a good way to describe your content in search results.

H2 status is less important, but h2 headings should be used so visitors can properly understand your website.

robots.txt

Short Recommendation

Your site has a robots.txt file.

  • robots.txt
  • User-agent: * Disallow: /wp-admin/ Disallow: /comments/ Disallow: /search/ Disallow: /sample-page/ Disallow: /notices/ Disallow: /common/ Disallow: /search/ Allow: /news/wp-admin/admin-ajax.php Sitemap: https://www.aydoo.fr/sitemap_index.xml Disallow: User-Agent: AITCSRobot/1.1 User-Agent: Alexibot User-Agent: Aqua_Products User-Agent: Arachnophilia User-Agent: ASpider/0.09 User-Agent: asterias User-Agent: AtraxSolutions User-Agent: AURESYS/1.0 User-Agent: b2w/0.1 User-Agent: BackDoorBot User-Agent: BackDoorBot/1.0 User-Agent: BackRub/. User-Agent: Bad Bots User-Agent: Baiduspider-video User-Agent: BecomeBot User-Agent: Big Brother User-Agent: Bizbot003 User-Agent: BizBot04 kirk.overleaf.com User-Agent: Black Hole User-Agent: Black.Hole User-Agent: BlackWidow User-Agent: BLEXBot User-Agent: BlowFish User-Agent: BlowFish/1.0 User-Agent: Bookmark search tool User-Agent: Bot mailto:craftbot@yahoo.com User-Agent: BotALot User-Agent: BotRightHere User-Agent: BSpider/1.0 libwww-perl/0.40 User-Agent: BuiltBotTough User-Agent: Bullseye User-Agent: Bullseye/1.0 User-Agent: BunnySlippers User-Agent: CACTVS Chemistry Spider User-Agent: CamontSpider User-Agent: ccbot User-Agent: Cegbfeieh User-Agent: ChangeDetection User-Agent: Checkbot/x.xx LWP/5.x User-Agent: CheeseBot User-Agent: CherryPicker User-Agent: CherryPickerElite/1.0 User-Agent: CherryPickerSE/1.0 User-Agent: ChinaClaw User-Agent: Cliqzbot User-Agent: combine/0.0 User-Agent: conceptbot/0.3 User-Agent: Copernic User-Agent: CopyRightCheck User-Agent: cosmos User-Agent: Crescent User-Agent: Crescent Internet ToolPak HTTP OLE Control v.1.0 User-Agent: Custo User-Agent: CyberPatrol SiteCat Webbot User-Agent: CyberSpyder/2.1 User-Agent: Daumoa User-Agent: Deweb/1.01 User-Agent: DIIbot User-Agent: DISCo User-Agent: DISCo Pump 3.0 User-Agent: DISCo Pump 3.2 User-Agent: discobot User-Agent: DISCoFinder User-Agent: DittoSpyder User-Agent: DOC User-Agent: dotbot User-Agent: Download Demon User-Agent: Download Demon/3.2.0.8 User-Agent: Download Demon/3.5.0.11 User-Agent: Download Ninja User-Agent: dumbot User-Agent: eCatch User-Agent: eCatch/3.0 User-Agent: EirGrabber User-Agent: EmailCollector User-Agent: EmailSiphon User-Agent: EmailWolf User-Agent: EnigmaBot User-Agent: Enterprise_Search User-Agent: Enterprise_Search/1.0 User-Agent: EroCrawler User-Agent: es User-Agent: Exabot User-Agent: explorersearch User-Agent: Express WebPictures User-Agent: Express WebPictures (www.express-soft.com) User-Agent: ExtractorPro User-Agent: EyeNetIE User-Agent: FairAd Client User-Agent: FelixIDE/1.0 User-Agent: Fetch User-Agent: fido/0.9 Harvest/1.4.pl2 User-Agent: Fish-Search-Robot User-Agent: Flaming AttackBot User-Agent: FlashGet User-Agent: FlashGet WebWasher 3.2 User-Agent: Foobot User-Agent: Freecrawl User-Agent: FreeFind User-Agent: FrontPage User-Agent: FrontPage [NC,OR] User-Agent: Gaisbot User-Agent: gcreep/1.0 User-Agent: GetRight User-Agent: GetRight/2.11 User-Agent: GetRight/3.1 User-Agent: GetRight/3.2 User-Agent: GetRight/3.3 User-Agent: GetRight/3.3.3 User-Agent: GetRight/3.3.4 User-Agent: GetRight/4.0.0 User-Agent: GetRight/4.1.0 User-Agent: GetRight/4.1.1 User-Agent: GetRight/4.1.2 User-Agent: GetRight/4.2 User-Agent: GetRight/4.2b (Portuguxeas) User-Agent: GetRight/4.2c User-Agent: GetRight/4.3 User-Agent: GetRight/4.5 User-Agent: GetRight/4.5a User-Agent: GetRight/4.5b User-Agent: GetRight/4.5b1 User-Agent: GetRight/4.5b2 User-Agent: GetRight/4.5b3 User-Agent: GetRight/4.5b6 User-Agent: GetRight/4.5b7 User-Agent: GetRight/4.5c User-Agent: 
GetRight/4.5d User-Agent: GetRight/4.5e User-Agent: GetRight/5.0beta1 User-Agent: GetRight/5.0beta2 User-Agent: GetURL.rexx v1.05 User-Agent: GetWeb! User-Agent: Go!Zilla User-Agent: Go!Zilla (www.gozilla.com) User-Agent: Go!Zilla 3.3 (www.gozilla.com) User-Agent: Go!Zilla 3.5 (www.gozilla.com) User-Agent: Go-Ahead-Got-It User-Agent: Golem/1.1 User-Agent: GrabNet User-Agent: Grafula User-Agent: Gromit/1.0 User-Agent: grub User-Agent: grub-client User-Agent: Hmhkki/0.2 User-Agent: HappyFunBot User-Agent: Harvest User-Agent: Harvest/1.5 User-Agent: Hatena Antenna User-Agent: Hazel's Ferret Web hopper User-Agent: Heritrix User-Agent: hloader User-Agent: HMView User-Agent: httplib User-Agent: HTTrack User-Agent: HTTrack [NC,OR] User-Agent: HTTrack 3.0 User-Agent: Huaweisymantecspider User-Agent: humanlinks User-Agent: Image Stripper User-Agent: Image Sucker User-Agent: inagist.com url crawler User-Agent: IncyWincy/1.0b1 User-Agent: Indy Library User-Agent: Indy Library [NC,OR] User-Agent: InfoNaviRobot User-Agent: Informant User-Agent: INGRID/0.1 User-Agent: InterGET User-Agent: Internet Ninja User-Agent: Internet Ninja 4.0 User-Agent: Internet Ninja 5.0 User-Agent: Internet Ninja 6.0 User-Agent: Iron33/1.0.2 User-Agent: IsraeliSearch/1.0 User-Agent: ITI Spider User-Agent: JennyBot User-Agent: Jetbot User-Agent: Jetbot/1.0 User-Agent: JetCar User-Agent: JOC Web Spider User-Agent: JubiiRobot User-Agent: jumpstation User-Agent: k2spider User-Agent: Katipo/1.0 User-Agent: Kenjin Spider User-Agent: Kenjin.Spider User-Agent: Keyword Density/0.9 User-Agent: Keyword.Density User-Agent: KIT-Fireball/2.0 libwww/5.0a User-Agent: LabelGrab/1.1 User-Agent: larbin User-Agent: larbin (samualt9@bigfoot.com) User-Agent: larbin samualt9@bigfoot.com User-Agent: larbin_2.6.2 (kabura@sushi.com) User-Agent: larbin_2.6.2 (larbin2.6.2@unspecified.mail) User-Agent: larbin_2.6.2 (listonATccDOTgatechDOTedu) User-Agent: larbin_2.6.2 (vitalbox1@hotmail.com) User-Agent: larbin_2.6.2 kabura@sushi.com User-Agent: larbin_2.6.2 larbin@correa.org User-Agent: larbin_2.6.2 larbin2.6.2@unspecified.mail User-Agent: larbin_2.6.2 listonATccDOTgatechDOTedu User-Agent: larbin_2.6.2 vitalbox1@hotmail.com User-Agent: LeechFTP User-Agent: LexiBot User-Agent: libWeb/clsHTTP User-Agent: libwww User-Agent: LinkextractorPro User-Agent: linklooker User-Agent: linko User-Agent: LinkScan/8.1a Unix User-Agent: LinkScan/8.1a.Unix User-Agent: LinkWalker User-Agent: LNSpiderguy User-Agent: lwp-trivial User-Agent: lwp-trivial/1.34 User-Agent: Mass Downloader User-Agent: Mass Downloader/2.2 User-Agent: Mata Hari User-Agent: Mata.Hari User-Agent: MediaFox/x.y User-Agent: MerzScope User-Agent: METAGOPHER User-Agent: Microsoft URL Control User-Agent: Microsoft URL Control - 5.01.4511 User-Agent: Microsoft URL Control - 6.00.8169 User-Agent: Microsoft.URL User-Agent: Microsoft.URL.Control User-Agent: MIDown tool User-Agent: MIIxpc User-Agent: MIIxpc/4.2 User-Agent: Mister PiX User-Agent: Mister Pix II 2.01 User-Agent: Mister Pix II 2.02a User-Agent: Mister PiX version.dll User-Agent: Mister.PiX User-Agent: moget User-Agent: moget/2.1 User-Agent: MOMspider/1.00 libwww-perl/0.40 User-Agent: Motor/0.2 User-Agent: Mozilla/4.0 (compatible; BullsEye; Windows 95) User-Agent: MSIECrawler User-Agent: naver User-Agent: Navroad User-Agent: NearSite User-Agent: NeoScioCrawler User-Agent: Net Vampire User-Agent: Net Vampire/3.0 User-Agent: NetAnts User-Agent: NetAnts/1.10 User-Agent: NetAnts/1.23 User-Agent: NetAnts/1.24 User-Agent: NetAnts/1.25 User-Agent: NetCarta 
CyberPilot Pro User-Agent: NetMechanic User-Agent: NetScoop/1.0 libwww/5.0a User-Agent: NetSpider User-Agent: NetZIP User-Agent: NetZip Downloader 1.0 Win32(Nov 12 1998) User-Agent: NetZip-Downloader/1.0.62 (Win32; Dec 7 1998) User-Agent: NetZippy+(http://www.innerprise.net/usp-spider.asp) User-Agent: NHSEWalker/3.0 User-Agent: NICErsPRO User-Agent: Nomad-V2.x User-Agent: NPBot User-Agent: Nutch User-Agent: Occam/1.0 User-Agent: Octopus User-Agent: Offline Explorer User-Agent: Offline Explorer/1.2 User-Agent: Offline Explorer/1.4 User-Agent: Offline Explorer/1.6 User-Agent: Offline Explorer/1.7 User-Agent: Offline Explorer/1.9 User-Agent: Offline Explorer/2.0 User-Agent: Offline Explorer/2.1 User-Agent: Offline Explorer/2.3 User-Agent: Offline Explorer/2.4 User-Agent: Offline Explorer/2.5 User-Agent: Offline Navigator User-Agent: Offline.Explorer User-Agent: OGspider User-Agent: Open Text Site Crawler V1.0 User-Agent: Openbot User-Agent: Openfind User-Agent: Openfind data gathere User-Agent: Openfind data gatherer User-Agent: Oracle Ultra Search User-Agent: PageGrabber User-Agent: panscient.com User-Agent: Papa Foto User-Agent: pavuk User-Agent: pcBrowser User-Agent: PerMan User-agent: PetalBot User-Agent: PGP-KA/1.2 User-Agent: ProPowerBot/2.14 User-Agent: ProWebWalker User-Agent: psbot User-Agent: Python-urllib User-Agent: QuepasaCreep User-Agent: QueryN Metasearch User-Agent: QueryN.Metasearch User-Agent: R6_CommentReader User-Agent: R6_FeedFetcher User-Agent: Radiation Retriever 1.1 User-Agent: RealDownload User-Agent: RealDownload/4.0.0.40 User-Agent: RealDownload/4.0.0.41 User-Agent: RealDownload/4.0.0.42 User-Agent: ReGet User-Agent: RepoMonkey User-Agent: RepoMonkey Bait & Tackle/v1.01 User-Agent: Resume Robot User-Agent: RMA User-Agent: Roverbot User-Agent: SafetyNet Robot 0.1 User-Agent: SapphireWebCrawler User-Agent: ScoutJet User-Agent: searchpreview User-Agent: Senrigan/xxxxxx User-Agent: sitecheck.internetseer.com User-Agent: SiteSnagger User-Agent: SlySearch User-Agent: SmartDownload User-Agent: SmartDownload/1.2.76 (Win32; Apr 1 1999) User-Agent: SmartDownload/1.2.77 (Win32; Aug 17 1999) User-Agent: SmartDownload/1.2.77 (Win32; Feb 1 2000) User-Agent: SmartDownload/1.2.77 (Win32; Jun 19 2001) User-Agent: Snooper/b97_01 User-Agent: Solbot/1.0 LWP/5.07 User-Agent: sootle User-Agent: SpankBot User-Agent: spanner User-Agent: Spanner/1.0 (Linux 2.0.27 i586) User-Agent: spyder3.microsys.com User-Agent: Sqworm/2.9.85-BETA (beta_release; 20011115-775; i686-pc-linux User-Agent: Stanford User-Agent: Stanford Comp Sci User-Agent: SuperBot User-Agent: SuperBot/3.0 (Win32) User-Agent: SuperBot/3.1 (Win32) User-Agent: SuperHTTP User-Agent: SuperHTTP/1.0 User-Agent: Surfbot User-Agent: suzuran User-Agent: Szukacz/1.4 User-Agent: tAkeOut User-Agent: Teleport User-Agent: Teleport Pro User-Agent: Teleport Pro/1.29 User-Agent: Teleport Pro/1.29.1590 User-Agent: Teleport Pro/1.29.1634 User-Agent: Teleport Pro/1.29.1718 User-Agent: Teleport Pro/1.29.1820 User-Agent: Teleport Pro/1.29.1847 User-Agent: TeleportPro User-Agent: Telesoft User-Agent: The Intraformant User-Agent: The.Intraformant User-Agent: TheNomad User-Agent: TightTwatBot User-Agent: Titan User-Agent: toCrawl/UrlDispatcher User-Agent: trendictionbot User-Agent: True_Robot User-Agent: True_Robot/1.0 User-Agent: turingos User-Agent: TurnitinBot User-Agent: TurnitinBot/1.5 User-Agent: twiceler User-Agent: UbiCrawler User-Agent: UCSD-Crawler User-Agent: UnisterBot User-Agent: UnwindFetchor/1.0 User-Agent: URL Control User-Agent: 
URL_Spider_Pro User-Agent: urlck/1.2.3 User-Agent: URLSpiderPro User-Agent: URLy Warning User-Agent: URLy.Warning User-Agent: Valkyrie/1.0 libwww-perl/0.40 User-Agent: vBSEO User-Agent: VCI User-Agent: VCI WebViewer VCI WebViewer Win32 User-Agent: VoidEYE User-Agent: Web Image Collector User-Agent: Web Sucker User-Agent: Web.Image.Collector User-Agent: WebAuto User-Agent: WebAuto/3.40 (Win98; I) User-Agent: WebBandit User-Agent: WebBandit/3.50 User-Agent: WebCapture 2.0 User-Agent: WebCopier User-Agent: WebCopier v.2.2 User-Agent: WebCopier v2.5 User-Agent: WebCopier v2.6 User-Agent: WebCopier v2.7a User-Agent: WebCopier v2.8 User-Agent: WebCopier v3.0 User-Agent: WebCopier v3.0.1 User-Agent: WebCopier v3.2 User-Agent: WebCopier v3.2a User-Agent: WebCopy/ User-Agent: WebCrawler/3.0 Robot libwww/5.0a User-Agent: WebEMailExtrac.* User-Agent: WebEnhancer User-Agent: WebFerret User-Agent: WebFetch User-Agent: webfetch/2.1.0 User-Agent: WebFetcher/0.8, User-Agent: WebGo IS User-Agent: weblayers/0.0 User-Agent: WebLeacher User-Agent: WebLinker/0.0 libwww-perl/0.1 User-Agent: WebmasterWorld Extractor User-Agent: WebmasterWorldForumBot User-Agent: WebMoose/0.0.0000 User-Agent: WebReaper User-Agent: WebReaper [info@webreaper.net] User-Agent: WebReaper [webreaper@otway.com] User-Agent: WebReaper v9.1 - www.otway.com/webreaper User-Agent: WebReaper v9.7 - www.webreaper.net User-Agent: WebReaper v9.8 - www.webreaper.net User-Agent: WebReaper vWebReaper v7.3 - www,otway.com/webreaper User-Agent: webs@recruit.co.jp User-Agent: WebSauger User-Agent: WebSauger 1.20b User-Agent: WebSauger 1.20j User-Agent: WebSauger 1.20k User-Agent: Website eXtractor User-Agent: Website Quester User-Agent: Website Quester - www.asona.org User-Agent: Website Quester - www.esalesbiz.com/extra/ User-Agent: Website.Quester User-Agent: Webster Pro User-Agent: Webster.Pro User-Agent: WebStripper User-Agent: WebStripper/2.03 User-Agent: WebStripper/2.10 User-Agent: WebStripper/2.12 User-Agent: WebStripper/2.13 User-Agent: WebStripper/2.15 User-Agent: WebStripper/2.16 User-Agent: WebStripper/2.19 User-Agent: WebVac User-Agent: webvac/1.0 User-Agent: webwalk User-Agent: WebWalker User-Agent: WebWalker/1.10 User-Agent: WebWatch User-Agent: WebWhacker User-Agent: WebZip User-Agent: WebZIP/2.75 (http://www.spidersoft.com) User-Agent: WebZIP/3.65 (http://www.spidersoft.com) User-Agent: WebZIP/3.80 (http://www.spidersoft.com) User-Agent: WebZip/4.0 User-Agent: WebZIP/4.0 (http://www.spidersoft.com) User-Agent: WebZIP/4.1 (http://www.spidersoft.com) User-Agent: WebZIP/4.21 User-Agent: WebZIP/4.21 (http://www.spidersoft.com) User-Agent: WebZIP/5.0 User-Agent: WebZIP/5.0 (http://www.spidersoft.com) User-Agent: WebZIP/5.0 PR1 (http://www.spidersoft.com) User-Agent: Wget User-Agent: Wget/1.4.0 User-Agent: Wget/1.5.2 User-Agent: Wget/1.5.3 User-Agent: Wget/1.6 User-Agent: Wget/1.7 User-Agent: Wget/1.8 User-Agent: Wget/1.8.1 User-Agent: Wget/1.8.1+cvs User-Agent: Wget/1.8.2 User-Agent: Wget/1.9-beta User-Agent: WhoWhere Robot User-Agent: Widow User-Agent: wired-digital-newsbot/1.5 User-Agent: WWW Collector User-Agent: www.freeloader.com. 
User-Agent: WWW-Collector-E User-Agent: WWWOFFLE User-Agent: WWWWanderer v3.0 User-Agent: Xaldon WebSpider User-Agent: Xaldon WebSpider 2.5.b3 User-Agent: Xaldon_WebSpider User-Agent: Xenu User-Agent: Xenu's User-Agent: Xenu's Link Sleuth 1.1c User-Agent: XGET/0.7 User-Agent: Yahoo Pipes 1.0 User-Agent: Yahoo Pipes 2.0 User-Agent: Yandex User-Agent: YandexSomething User-Agent: Yasaklibot User-Agent: yes User-Agent: YesupBot User-Agent: Yeti User-Agent: Zao User-Agent: Zealbot User-Agent: Zeus User-Agent: Zeus 11389 Webster Pro V2.9 Win32 User-Agent: Zeus 11652 Webster Pro V2.9 Win32 User-Agent: Zeus 18018 Webster Pro V2.9 Win32 User-Agent: Zeus 26378 Webster Pro V2.9 Win32 User-Agent: Zeus 30747 Webster Pro V2.9 Win32 User-Agent: Zeus 32297 Webster Pro V2.9 Win32 User-Agent: Zeus 39206 Webster Pro V2.9 Win32 User-Agent: Zeus 41641 Webster Pro V2.9 Win32 User-Agent: Zeus 44238 Webster Pro V2.9 Win32 User-Agent: Zeus 51070 Webster Pro V2.9 Win32 User-Agent: Zeus 51674 Webster Pro V2.9 Win32 User-Agent: Zeus 51837 Webster Pro V2.9 Win32 User-Agent: Zeus 63567 Webster Pro V2.9 Win32 User-Agent: Zeus 6694 Webster Pro V2.9 Win32 User-Agent: Zeus 82016 Webster Pro V2.9 Win32 User-Agent: Zeus 82900 Webster Pro V2.9 Win32 User-Agent: Zeus 84842 Webster Pro V2.9 Win32 User-Agent: Zeus 90872 Webster Pro V2.9 Win32 User-Agent: Zeus 94934 Webster Pro V2.9 Win32 User-Agent: Zeus 95245 Webster Pro V2.9 Win32 User-Agent: Zeus 95351 Webster Pro V2.9 Win32 User-Agent: Zeus 97371 Webster Pro V2.9 Win32 User-Agent: Zeus Link Scout User-Agent: ZyBORG Disallow: /


robots.txt is a text file that resides in the website root directory and contains instructions for various robots (mainly search engine robots) on how to crawl and index your website's pages. robots.txt lists the bot names, the directories allowed or disallowed for crawling and indexing, the crawl delay for bots, and even the sitemap URL. Full access, full restriction, or customized access and restriction can all be imposed through robots.txt.

robots.txt is very important for SEO. Your website's directories will be crawled and indexed by search engines according to the robots.txt instructions, so add a robots.txt file to your website root directory. Write it properly: include your content-rich pages and other public pages, and exclude any pages that contain sensitive information. Remember that robots.txt is not a security mechanism; restricting access to sensitive pages through robots.txt provides no real protection, so do not use it for security purposes.
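
For contrast with the very long file quoted above, a minimal well-formed robots.txt covering the same essentials (directives taken from the file above):

User-agent: *
Disallow: /wp-admin/
Allow: /news/wp-admin/admin-ajax.php
Sitemap: https://www.aydoo.fr/sitemap_index.xml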

Internal vs. External Links

Total Internal Links

10

Total External Links

0
  • Internal Links
  • https://www.aydoo.fr/
  • https://www.aydoo.fr/assurance-emprunteur/
  • https://www.aydoo.fr/assurance-habitation/
  • https://www.aydoo.fr/assurance-auto/
  • https://www.aydoo.fr/mutuelle/
  • https://www.aydoo.fr/mutuelle/mutuelle-dentaire/
  • https://www.aydoo.fr/mutuelle/mutuelle-optique/
  • https://www.aydoo.fr/mutuelle/assureurs/
  • https://www.aydoo.fr/assurance-voyage/
  • https://www.aydoo.fr/assurance-decennale/
  • External Links

Domain IP Information

IP

87.98.161.117

City

Strasbourg

Country

FR

Time Zone

Europe/Paris

Longitude

7.7455

Latitude

48.5839

NoIndex, NoFollow, DoFollow Links

Total NoIndex Links

0

Total NoFollow Links

0

Total DoFollow Links

10

NoIndex Enabled by Meta Robot?

No

NoFollow Enabled by Meta Robot?

No
  • NoIndex Links
  • NoFollow Links

NoIndex: the noindex directive is a meta tag value that tells search engines not to show a page in search results. Do not set 'noindex' in your meta tags if you want your website to appear in search results.

By default, a webpage is set to "index." Add a <meta name="robots" content="noindex" /> directive to the <head> section of a page's HTML if you do not want search engines to include that page in the SERPs (Search Engine Results Pages).

DoFollow & NoFollow: the nofollow directive is a meta tag value that tells search engine bots not to follow any links on your website. Do not set 'nofollow' in your meta tags if you want search engine bots to follow your links.

By default, links are set to “follow.” You would set a link to “nofollow” in this way: <a href="http://www.example.com/" rel="nofollow">Anchor Text</a> if you want to suggest to Google that the hyperlink should not pass any link equity/SEO value to the link target.


SEO Friendly Links

Short Recommendation

Links of your site are SEO friendly.


An SEO-friendly link roughly follows these rules: the URL should use dashes as separators, should not contain parameters or numbers, and should be static.

To resolve a failing test, use these techniques:
1) Replace underscores or other separators with dashes; clean URLs by deleting or replacing numbers and parameters.
2) Merge your www and non-www URLs.
3) Do not use dynamic URLs; create an XML sitemap for proper search engine indexing.
4) Block unfriendly and irrelevant links through robots.txt.
5) Declare your canonical URLs in a canonical tag.

Plain Text Email Test

Short Recommendation

Site passed plain text email test. No plain text email found.


A plain-text email address is vulnerable to email-scraping agents: such an agent crawls your website and collects every email address written in plain text. Plain-text email addresses on your website can therefore help spammers with email harvesting, which can be a bad signal for search engines.

To fight this you can obfuscate your email addresses in several ways (option 4 is sketched below):
1) CSS pseudo-classes.
2) Writing your email address backward.
3) Turning off display using CSS.
4) Obfuscating your email address using JavaScript.
5) Using WordPress and PHP (WordPress sites only).
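
A minimal sketch of option 4, JavaScript obfuscation; the element id and the address are hypothetical placeholders, not anything found on this site:

<span id="contact-email"></span>
<script>
  // Assemble the address at runtime so it never appears as plain text in the HTML source
  var user = "contact";
  var domain = "example.com";
  var address = user + "@" + domain;
  document.getElementById("contact-email").innerHTML =
    '<a href="mailto:' + address + '">' + address + '</a>';
</script>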

Favicon

Short Recommendation

Your site has a favicon.

DOC Type

DOC Type : <!DOCTYPE html>
Short Recommendation

Page has a doctype.

A doctype is not an SEO factor, but it is checked when validating your web page, so set a doctype in your HTML page.

Image 'alt' Test

Short Recommendation

Your site has 2 images without alt text.

  • Images Without alt
  • https://www.aydoo.fr/wp-content/uploads/2023/12/assurance.jpg
  • https://www.aydoo.fr/wp-content/uploads/2023/12/comparateur-assurance-etapes.jpg

The alt attribute provides alternate text describing an image. It is necessary for informing search engine spiders and for improving your website's accessibility. Give a suitable description to every image that is part of your website's content (purely decorative design images can be excluded). To resolve this issue, put a suitable description in the alt attribute of each image listed above.
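
A minimal sketch for the first flagged image; the alt text is an illustrative description, not taken from the site:

<img src="https://www.aydoo.fr/wp-content/uploads/2023/12/assurance.jpg"
     alt="Comparaison de polices d'assurance">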

Deprecated HTML Tags

Short Recommendation

Your site does not have any deprecated HTML tags.


Older HTML tags and attributes that have been superseded by other, more functional or flexible alternatives (whether as HTML or as CSS) are declared deprecated in HTML4 by the W3C, the consortium that sets the HTML standards. Browsers should continue to support deprecated tags and attributes, but eventually these tags are likely to become obsolete, so future support cannot be guaranteed.

HTML Page Size

HTML Page Size : 255 KB
Short Recommendation

HTML page size is > 100 KB

HTML page size is one of the main factors in webpage loading time; according to Google's recommendation it should be less than 100 KB. Note that this size does not include external CSS, JS, or image files. The smaller the page, the shorter the loading time.

To reduce your page size, take these steps (step 1 is sketched below):
1) Move all your CSS and JS code to external files.
2) Make sure your text content is at the top of the page so it can be displayed before the full page loads.
3) Reduce or compress all images, media files, etc.; it is better if each of these files is less than 100 KB.
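
A minimal sketch of step 1, moving inline styles and scripts into external files; the file paths are hypothetical:

<head>
  <!-- Replaces <style>...</style> blocks embedded in the page -->
  <link rel="stylesheet" href="/assets/styles.css">
</head>
<body>
  <!-- Page content here; keep visible text near the top -->
  <!-- Replaces inline <script> blocks; defer keeps it from blocking rendering -->
  <script src="/assets/app.js" defer></script>
</body>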

GZIP Compression

Short Recommendation

GZIP compression is disabled.

GZIP is a generic compressor that can be applied to any stream of bytes: under the hood, it remembers some of the previously seen content and attempts to find and replace duplicate data fragments efficiently. In practice, GZIP performs best on text-based content, often achieving compression rates as high as 70-90% for larger files, whereas running GZIP on assets that are already compressed by other algorithms (e.g., most image formats) yields little to no improvement. It is also recommended that the GZIP-compressed size be <= 33 KB.
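
One common way to enable it is server-side. A minimal sketch assuming an Apache server with mod_deflate enabled (nginx and other servers have equivalent directives):

<IfModule mod_deflate.c>
  # Compress text-based responses; already-compressed assets such as images are left alone
  AddOutputFilterByType DEFLATE text/html text/plain text/css
  AddOutputFilterByType DEFLATE application/javascript application/json application/xml
</IfModule>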

Inline CSS

Short Recommendation

Your site has 4 inline CSS snippets.

  • Inline CSS
  • <strong style="font-size: 17px;"></strong>
  • <span style="color: #ffffff;" id = "Pourquoiutiliseruncomparateurdrsquoassurance"></span>
  • <span style="text-decoration: underline;"></span>
  • <p style="text-align: center;"></p>

Inline CSS is CSS code that sits on individual HTML tags within the page rather than in an external .css file. Inline CSS increases your webpage's loading time, which is an important search engine ranking factor, so try not to use it (see the sketch below).
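
A minimal sketch of the fix, using one of the flagged snippets above; the class name is hypothetical:

<!-- Before: style attribute on the tag -->
<span style="text-decoration: underline;">...</span>

<!-- After: class in the HTML, rule in the external stylesheet -->
<span class="is-underlined">...</span>

/* In the external stylesheet */
.is-underlined { text-decoration: underline; }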

Internal CSS

Short Recommendation

Your site has 10 internal CSS blocks.

Internal CSS is CSS code that resides in the HTML page inside a style tag. Internal CSS increases loading time, since no page caching is possible for it. Try to put your CSS code in an external file.

Micro Data Schema Test

Short Recommendation

Site failed micro data schema test.

Microdata is the information underlying an HTML string or paragraph. Consider the string "Avatar": it could refer to a profile picture on a forum, blog, or social networking site, or to a highly successful 3D movie. Microdata is used to specify the reference or underlying meaning of an HTML string. Microdata gives search engines and other applications the chance to understand your content better and display it more prominently in search results.
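
A minimal sketch of schema.org microdata for an organization page like this one; the item type and property values are illustrative, not the site's actual markup:

<div itemscope itemtype="https://schema.org/Organization">
  <span itemprop="name">Aydoo</span>
  <link itemprop="url" href="https://www.aydoo.fr/">
  <span itemprop="description">Comparateur d'assurances en ligne</span>
</div>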

IP & DNS Report

IPv4

87.98.161.117

IPv6

Not compatible
DNS Report
SL  Host          Class  TTL   Type  IP
1   www.aydoo.fr  IN     3600  A     87.98.161.117

IP Canonicalization Test

Short Recommendation

Site failed IP canonicalization test.

If multiple domain names are registered under a single IP address, search bots can label some of the sites as duplicates of another; this is IP canonicalization, a little like URL canonicalization. To solve it, use redirects (a sketch follows below).
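
A minimal sketch assuming an Apache server with mod_rewrite: requests that arrive addressed to the bare IP are permanently redirected to the canonical host name (IP and domain taken from this report):

<IfModule mod_rewrite.c>
  RewriteEngine On
  # Send requests addressed to the raw IP to the canonical domain
  RewriteCond %{HTTP_HOST} ^87\.98\.161\.117$
  RewriteRule ^(.*)$ https://www.aydoo.fr/$1 [R=301,L]
</IfModule>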

URL Canonicalization Test

Short Recommendation

Site passed URL canonicalization test.

Canonical tags consolidate all the URLs that lead to a single address or webpage into one canonical URL. For example:
<link rel="canonical" href="https://mywebsite.com/home" />
<link rel="canonical" href="https://www.mywebsite.com/home" />
Both refer to mywebsite.com/home, so all the different URLs with the same content or page now fall under that single URL. This boosts your search engine ranking by eliminating content duplication. Use a canonical tag for every set of duplicate URLs.

cURL Response

  • url : https://www.aydoo.fr/
  • content type : text/html; charset=UTF-8
  • http code : 200
  • header size : 472
  • request size : 0
  • filetime : -1
  • ssl verify result : 20
  • redirect count : 0
  • total time : 2.404932
  • namelookup time : 0.503141
  • connect time : 0.655085
  • pretransfer time : 0.808793
  • size upload : 0
  • size download : 260865
  • speed download : 108470
  • speed upload : 0
  • download content length : 260865
  • upload content length : 0
  • starttransfer time : 1.799796
  • redirect time : 0
  • redirect url :
  • primary ip : 87.98.161.117
  • certinfo :
  • primary port : 443
  • local ip : 162.0.220.101
  • local port : 37132
  • http version : 3
  • protocol : 2
  • ssl verifyresult : 0
  • scheme : HTTPS
  • appconnect time us : 808627
  • connect time us : 655085
  • namelookup time us : 503141
  • pretransfer time us : 808793
  • redirect time us : 0
  • starttransfer time us : 1799796
  • total time us : 2404932

PageSpeed Insights (Mobile)

Performance

  • Emulated Form Factor Mobile
  • Locale En-US
  • Category Performance
  • Field Data: no values reported
  • Origin Summary: no values reported
  • Lab Data
  • First Contentful Paint 3.5 s
  • First Meaningful Paint 3.5 s
  • Speed Index 4.4 s
  • Time to Interactive 4.3 s
  • Max Potential First Input Delay 130 ms

Audit Data

Resources Summary

Aggregates all network requests and groups them by type.

Eliminate render-blocking resources

Potential savings of 0 ms

Resources are blocking the first paint of your page. Consider delivering critical JS/CSS inline and deferring all non-critical JS/styles.

Efficiently encode images

Optimized images load faster and consume less cellular data.

Enable text compression

Text-based resources should be served with compression (gzip, deflate or brotli) to minimize total network bytes.

Serve static assets with an efficient cache policy

20 resources found

A long cache lifetime can speed up repeat visits to your page.
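
A minimal sketch assuming an Apache server with mod_expires; the lifetimes are illustrative choices, not values from this audit:

<IfModule mod_expires.c>
  ExpiresActive On
  # Long lifetimes for static assets that rarely change
  ExpiresByType image/jpeg "access plus 1 year"
  ExpiresByType text/css "access plus 1 month"
  ExpiresByType application/javascript "access plus 1 month"
</IfModule>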

Minimize third-party usage

Third-party code blocked the main thread for 0 ms

Third-party code can significantly impact load performance. Limit the number of redundant third-party providers and try to load third-party code after your page has primarily finished loading.

Total Blocking Time

60 ms

Sum of all time periods between FCP and Time to Interactive, when task length exceeded 50ms, expressed in milliseconds.

JavaScript execution time

0.5 s

Consider reducing the time spent parsing, compiling, and executing JS. You may find delivering smaller JS payloads helps with this.

Defer offscreen images

Potential savings of 67 KiB

Consider lazy-loading offscreen and hidden images after all critical resources have finished loading to lower time to interactive.
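
A minimal sketch using the native loading attribute, applied to one of this site's images (the alt text is illustrative); a JavaScript lazy-loading library is an alternative for older browsers:

<img src="https://www.aydoo.fr/wp-content/uploads/2023/12/comparateur-assurance-etapes.jpg"
     alt="Étapes du comparateur d'assurance"
     loading="lazy">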

Server Backend Latencies

0 ms

Server latencies can impact web performance. If the server latency of an origin is high, it's an indication the server is overloaded or has poor backend performance.

Properly size images

Potential savings of 18 KiB

Serve images that are appropriately-sized to save cellular data and improve load time.

Reduce unused CSS

Potential savings of 13 KiB

Reduce unused rules from stylesheets and defer CSS not used for above-the-fold content to decrease bytes consumed by network activity.

Avoids enormous network payloads

Total size was 869 KiB

Large network payloads cost users real money and are highly correlated with long load times.

Minimize main-thread work

2.1 s

Consider reducing the time spent parsing, compiling and executing JS. You may find delivering smaller JS payloads helps with this.

Avoid chaining critical requests

30 chains found

The Critical Request Chains below show you what resources are loaded with a high priority. Consider reducing the length of chains, reducing the download size of resources, or deferring the download of unnecessary resources to improve page load.

Avoids an excessive DOM size

226 elements

A large DOM will increase memory usage, cause longer style calculations, and produce costly layout reflows.

Avoid multiple page redirects

Redirects introduce additional delays before the page can be loaded.

Minify JavaScript

Minifying JavaScript files can reduce payload sizes and script parse time.

User Timing marks and measures

Consider instrumenting your app with the User Timing API to measure your app's real-world performance during key user experiences.

Network Round Trip Times

0 ms

Network round trip times (RTT) have a large impact on performance. If the RTT to an origin is high, it's an indication that servers closer to the user could improve performance.

PageSpeed Insights (Desktop)

Performance

  • Emulated Form Factor Desktop
  • Locale En-US
  • Category Performance
  • Field Data: no values reported
  • Origin Summary: no values reported
  • Lab Data
  • First Contentful Paint 0.8 s
  • First Meaningful Paint 0.8 s
  • Speed Index 1.5 s
  • Time to Interactive 0.8 s
  • Max Potential First Input Delay 30 ms

Audit Data

Resources Summary

Aggregates all network requests and groups them by type.

Eliminate render-blocking resources

Potential savings of 0 ms

Resources are blocking the first paint of your page. Consider delivering critical JS/CSS inline and deferring all non-critical JS/styles.

Efficiently encode images

Optimized images load faster and consume less cellular data.

Enable text compression

Text-based resources should be served with compression (gzip, deflate or brotli) to minimize total network bytes.

Serve static assets with an efficient cache policy

20 resources found

A long cache lifetime can speed up repeat visits to your page (see the caching sketch in the mobile section above).

Minimize third-party usage

Third-party code blocked the main thread for 0 ms

Third-party code can significantly impact load performance. Limit the number of redundant third-party providers and try to load third-party code after your page has primarily finished loading.

Total Blocking Time

0 ms

Sum of all time periods between FCP and Time to Interactive, when task length exceeded 50ms, expressed in milliseconds.

JavaScript execution time

0.0 s

Consider reducing the time spent parsing, compiling, and executing JS. You may find delivering smaller JS payloads helps with this.

Defer offscreen images

Consider lazy-loading offscreen and hidden images after all critical resources have finished loading to lower time to interactive (see the lazy-loading sketch in the mobile section above).

Server Backend Latencies

0 ms

Server latencies can impact web performance. If the server latency of an origin is high, it's an indication the server is overloaded or has poor backend performance.

Properly size images

Potential savings of 64 KiB

Serve images that are appropriately-sized to save cellular data and improve load time.

Reduce unused CSS

Potential savings of 13 KiB

Reduce unused rules from stylesheets and defer CSS not used for above-the-fold content to decrease bytes consumed by network activity.

Avoids enormous network payloads

Total size was 1,109 KiB

Large network payloads cost users real money and are highly correlated with long load times.

Minimizes main-thread work

0.4 s

Consider reducing the time spent parsing, compiling and executing JS. You may find delivering smaller JS payloads helps with this.

Avoid chaining critical requests

31 chains found

The Critical Request Chains below show you what resources are loaded with a high priority. Consider reducing the length of chains, reducing the download size of resources, or deferring the download of unnecessary resources to improve page load.

Avoids an excessive DOM size

226 elements

A large DOM will increase memory usage, cause longer style calculations, and produce costly layout reflows.

Avoid multiple page redirects

Redirects introduce additional delays before the page can be loaded.

Minify JavaScript

Minifying JavaScript files can reduce payload sizes and script parse time.

User Timing marks and measures

Consider instrumenting your app with the User Timing API to measure your app's real-world performance during key user experiences.

Network Round Trip Times

0 ms

Network round trip times (RTT) have a large impact on performance. If the RTT to an origin is high, it's an indication that servers closer to the user could improve performance.