All right, welcome everyone to today's Google Webmaster Central office hours Hangout. We have a small audience here live at the moment, so if you're watching this later and you'd like to join a future Hangout, make sure you check out the link to the calendar. To start with, are there any questions from you here in the Hangout?

Yes, I have one. I help manage a site that I put into the ODP, the DMOZ directory, and for some reason when it shows up in the search results it shows the directory title next to the company name. I'm wondering if that just takes time to go away-- it's been about six months, and it seems like it should be a lot faster than six months. The ODP data can be a bit tricky. It might be interesting to see that example, so if you can send me the URL, I can take a look at that afterwards. All right, let me take a look at the submitted questions. If there's anything I read out that you want more details on, feel free to jump in.

"When internal linking is implemented with tracking URLs, should we use only the original URLs as the canonicals?" So within the website, I'd try to use clean URLs and make sure that you're linking to the ones you want to have as canonical. The rel canonical is something we do look at. If external people are linking to pages within your website and they have tracking parameters in there, then you can use rel canonical to clean that up on your side. I just wouldn't do that within your own site-- when someone goes to your website and you link to your own pages with tracking parameters, that makes it really hard for us to confirm that the clean version is really the one you want. We do take a look at the canonical, but we also take into account how you link within your site, and if those don't match, then we don't really know which version to pick. The thing to keep in mind here is that the content of these pages is really identical, so we'll want to keep one version-- either the one with the tracking parameter or the clean URL. So, generally speaking, I'd just use clean URLs when you're linking within your own website.
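To make that concrete, here's a minimal Python sketch of the setup described above: tracked URL variants all declare one clean canonical. The domain and the list of tracking parameters are invented for the example, not something from the Hangout.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical tracking parameters we never want indexed as separate URLs.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def canonical_url(url):
    """Strip tracking parameters so every variant maps to one clean URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

def canonical_link_tag(url):
    """Return the <link> element to place in the page's <head>."""
    return '<link rel="canonical" href="%s">' % canonical_url(url)

print(canonical_link_tag("https://www.example.com/shoes?utm_source=news&color=red"))
# -> <link rel="canonical" href="https://www.example.com/shoes?color=red">
```

Following the answer above, your internal links would then use the clean URL directly, so the canonical tag and the internal linking tell the same story.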
"We're redoing the drop-down navigation on our e-commerce site. Can you offer any advice on best practices?" The main thing is to make sure that people, and Googlebot, can crawl through your website through the navigation to the important parts of your site. That doesn't mean that every page has to be interlinked with every other page-- that usually doesn't make sense. So I'd try to focus on what works for the user, and generally speaking we'll be able to crawl that and understand the context of your individual pages. For example, if there's only one detail page linked from a category page, that gives us a little bit more context about that page.

"In Search Console, the significance of one keyword is very high in the content keywords report, but we can't seem to rank for this term, and everything else seems OK." So the content keywords feature in Search Console shows the words we found when we crawled your website. It's not what we think is relevant for your site in search-- it's really just what we picked up from your pages. So I'd use it as a way to double-check that we can find the words that matter to you and that they're not totally off base. If your site is about one topic and you suddenly see pharmaceutical keywords in there, that's a pretty good sign that something is wrong, maybe that the site was hacked. But I wouldn't read too much into whether one word is listed higher than another-- it doesn't reflect the ranking side at all, so based on that report it's really hard for me to give advice on which direction you should be heading to rank for a term. What I'd recommend there is going to a webmaster community where you feel comfortable and getting input from other people who have been through this. Maybe there are some simple things you can do-- it's really a matter of finding out what's holding your site back.

"An SEO agency recommended we remove the internal links from the articles on our site, as Google can see this as over-optimization. From what I've read this might not be the case. What's your position?" In general I don't see a problem with internal links from articles on an e-commerce site. If you're an expert on the topic of the products you sell, then it makes sense that you write articles as well that give more insight and show the pros and cons of the variations, and it also makes sense to link from those articles to your product pages and to the rest of your site. That's not something I'd see as overly problematic. It's a reasonable way of linking, provided you're not leading everybody to one specific page on your site, but rather to the relevant products-- this is something we offer, this is something maybe someone else offers, or this is a link to the manufacturer's site.

"Is it OK to deliver different content to different mobile browsers, for example Chrome versus Safari?" Generally speaking, I'd try to minimize how much the content differs between individual browsers. What's important for us primarily is that Googlebot sees what a user would see. We see this a lot with dynamic serving, where maybe the mobile version of the page is structured differently from the desktop version, and that's fine. The problems you might run into are more on the practical side: it's a lot harder to test and debug what individual users actually see if every browser gets a slightly different variation, and caching also gets trickier-- if you serve a Vary header, that means caching networks between your site and the user can't just reuse one copy. So there are technical things you need to watch out for to make sure it's actually usable, but in general it's fine from our point of view. We call this dynamic serving for mobile, and we have a developer site article on dynamic serving that gives you more information on what you'd want to do on your server to let us know about it.

"We personalize pages with JavaScript, for example by superimposing localized h1 and h2 headings. What would that look like for Google?" In general you can use JavaScript to personalize your pages, and that kind of personalization is something you need to think about for Googlebot as well, because we render pages with JavaScript too. So if, for example, you show different things to users depending on where they're coming from, then what gets indexed for a page like your homepage might be the version that says "welcome, users from California," and someone searching will see that. So that's something to keep in mind: what Googlebot renders with JavaScript can be what ends up indexed.
"Is there any way to check the number of app screens indexed in Google? We've been using App Indexing." At the moment there isn't a direct way to do that. You do have the search analytics information, so you can see which individual pages were shown and roughly how many unique pages there were, but we don't have a count for app content like we have for web pages.

"Are home services ads coming to the UK?" I don't know. Home services ads are something fairly new, and I'm really not sure. In general, what happens with international launches is we try something new out, we see how it works, and then we try to spread it out, depending on how easy that is-- some of these projects have legal or policy issues around them, so we can't always take something that works in one country and launch it everywhere. If a service launched in one country works really well, then we try to spread it out into other countries, but that sometimes takes a while.

There's also a question about breadcrumb structured data where the last element isn't clickable. I took a quick look at the documentation, and from my point of view I'd mark it up in the way that makes sense for your pages and see what you find in the testing tool-- it's essentially a path leading to your pages, and it's not something that would get your pages classified as problematic.

"Should we noindex internal search result pages that return few or zero results? The reason being that they're pages with low quality." In general, if you can recognize that a search results page has zero results, that's something I'd recommend putting a noindex on, if possible. We try to recognize zero-results search pages ourselves, because it's a really bad user experience if someone searches, clicks on a result, and lands on a page on your site that has no results at all-- that's really bad for users, and it's something we do try to recognize. But if you can help us by letting us know, just put a noindex on there so that we don't try to show it in search.

A quick question on that from my side, since we've been trying everything: one thing we tried was to make sure that there were no pages where a tag is used only once or twice, so we'd rather have one big page with twenty items than lots of pages with almost no content on them. Would you recommend using noindex in a case like that? Sure, you can do that. I believe that's something where you make the decision-- you know which pages have a lot of content and which pages are thin, so you could say, well, I'll set a threshold somewhere in my CMS: if a page has less than 500 words I'll noindex it, and if it has more I'll let it be indexed. That's really up to you. From our point of view, our quality algorithms do look at the website overall-- they look at everything that's indexed-- and they might say, well, maybe this website overall is kind of lower quality. If you can tell us that this lower-quality content shouldn't be indexed, then we can really focus on the high-quality stuff that you have.

"Search Console shows that Google found a link to a page on our site that we don't link to anywhere. How is that possible, and do you take those pages into account?" In general, when we see pages like that, they were usually linked somewhere at some point-- we don't go out and look at things like Analytics to try URLs out. Maybe there was a broken link within your website, maybe there was a link somewhere else that was removed afterwards, but we picked it up at the time. All of these are different possibilities for how we find pages on your website and try to index them. If there's something that you don't want to have indexed at all, then I wouldn't rely on just not linking to it within your website-- sooner or later someone is going to stumble across it. So if it's confidential information, make sure it's protected properly, for example with server-side authentication, or block it with robots.txt so that we don't even try to crawl it. I wouldn't rely on something simply not being linked to keep it out.

"What is the user agent for the smartphone Googlebot? Does it make a difference?" Yes, we have different Googlebot user agents. We have the normal desktop user agent, we have the mobile smartphone version, which has recently been updated to use a newer iPhone user agent string, and we have the feature phone versions, which are different depending on the type of phone. So I'd check out the help center article on the Googlebot user agents.
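As a rough illustration of telling those user agents apart-- not something shown in the Hangout, and the exact strings change over time, so real code should match on tokens rather than full strings-- here's a small Python sketch:

```python
def classify_googlebot(user_agent):
    """Rough classification of a request's user agent string.

    The smartphone crawler identifies itself as a mobile device (it contains
    both "Googlebot" and "Mobile"); the desktop crawler doesn't. Match on
    tokens, not on the literal strings, because the strings get updated.
    """
    ua = user_agent or ""
    if "Googlebot" not in ua:
        return "not googlebot"
    return "googlebot smartphone" if "Mobile" in ua else "googlebot desktop"

# Abbreviated, illustrative strings -- not the literal current ones:
print(classify_googlebot(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"))
print(classify_googlebot(
    "Mozilla/5.0 (iPhone; ...) ... Mobile ... (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"))
```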
"Is Google considering making structured data part of the rankings?" In general, no. I guess indirectly, in the sense that we try to use structured data as a way to understand more about a page, and we try to use it for rich snippets, for example, or to recognize the type of business. That can lead to more visible search results, and sometimes a result that's ranked lower than others still gets more traffic just because it stands out and has more information about what you'll find there. Another thing we do see is that sites that implement structured data tend to use a fairly clean HTML format-- maybe they had really messy HTML and they had to clean it up to add the markup-- and while a cleaner layout isn't a ranking factor on its own, it does make it a lot easier for us to understand the context of what you're providing. So the structured data helps us a little bit there. And over time it's something that might flow into how we look at pages, in the sense that we see these pages are marked up with structured data, so probably they're useful in that regard, but we don't have anything like that at the moment. In the long run I think it does make sense to use structured data where it's relevant, but I wouldn't assume that adding structured data markup will make a site jump up in the rankings, because we do try to distinguish between a site that's actually good and one that just has markup on it.

With the sitelinks search box it's similar to the previous question: having the markup doesn't mean that we'll show the search box. The markup helps us to understand which search URL we should use, so that when we do show a search box, we use the version that you're specifying rather than one we come up with ourselves. Whether we show it at all isn't really changed by the markup-- we show it if we think it's reasonable for your site, for example if we can recognize that people are trying to find something within your website. If someone is searching for NASA, chances are they're not only looking for the NASA homepage but for something within the site, so there it would probably make sense to show it.
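For reference, this is roughly what the sitelinks search box markup looks like, generated here with a small Python sketch. The domain and the search URL pattern are invented; your own pattern has to match how your site search actually works, and, as noted above, the markup doesn't guarantee the box is shown.

```python
import json

# Hypothetical site; the target pattern should match your real site search URL.
markup = {
    "@context": "http://schema.org",
    "@type": "WebSite",
    "url": "https://www.example.com/",
    "potentialAction": {
        "@type": "SearchAction",
        "target": "https://www.example.com/search?q={search_term_string}",
        "query-input": "required name=search_term_string",
    },
}

print('<script type="application/ld+json">\n%s\n</script>'
      % json.dumps(markup, indent=2))
```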
"For how many months or years should we keep a 301 redirect in place, or can we remove it after some time?" Theoretically, you'd keep it forever. Practically, that's not always reasonable, for example if you're moving again at some point. In practice, what happens is that if we recognize that this is a permanent redirect, we try to keep that in mind for the future as well. So if we've been able to recognize and process a site move, then at some point you can take that redirect out. The thing to keep in mind is that there are probably still links going to the old versions of those pages, and if you remove the redirect, people following those links end up on the old URLs. So if you're changing your URLs, you really need to make sure to follow the site move guide that we have in the help center, which also includes reaching out to the sites linking to you, so that links to the old versions don't end up getting lost-- otherwise users end up on a 404 page, and that's really about user experience: you'd like to have those links associated with your actual content, not with a 404 page. So going back to the question: I'd definitely keep the redirect for at least a year, and I'd try to keep it longer than that if that's reasonable in your situation.

"Our CMS automatically adds pages of related content. Is it OK to use noindex on those pages?" They're essentially similar to search results pages, so it's really up to you. If you feel that these related pages provide a lot of value, then by all means let them be indexed. If you feel that they don't provide a lot of value on their own, then putting a noindex on them can make sense. You know your site best, so I'd choose between indexing them or using noindex and make that decision yourself.

"The mobile-friendly test says my pages aren't mobile-friendly, but when I go to my site on a mobile phone it's all good-- there are no problems." That's hard to say, because it could be different things. The mobile-friendly test uses Googlebot to get to the content, and in some situations that's not directly possible. For example, if your site is overloaded when we request these pages-- if the server returns a lot of server errors-- then we back off, because we don't want to cause problems, and the test can fail. Or there might be something like robots.txt blocking our crawler from the CSS and JavaScript, and that's something you'd probably want to resolve. You can use Fetch as Google in Search Console, where you can also choose between the desktop and the smartphone Googlebot, to double-check what exactly is happening on your site.

"When does Googlebot fill out forms on a site?" We do that in some very specific situations, where we think there's a search form and we're not getting to all of the content on the website-- in those kinds of situations, we might try some words from the site in the form. But that's a really rare situation. It's usually not the case that we go to a site and fill out the forms we find here and there. In practice it's really only search forms, and mostly when a site essentially only has a search form and otherwise doesn't have much navigation. If we see that and we suspect that there's a lot of really good content behind it, that's one situation where we might decide to try it out to find more links to actual content.

There's also a question about a feature that isn't available in India yet, and whether to hold off a bit and wait for the final recommendations to settle down rather than try to implement it as it is now. That's really up to you-- it's not something where we would argue one way or the other.

"We're changing our URL structure. Should all pages be resubmitted or marked as changed?" In general we do recrawl these pages over time, so there's nothing you need to do to avoid a problem. But you can use the sitemap file with a new change date to have the changes picked up a little bit faster, and you can also use Fetch as Google and submit individual pages to indexing if there are specific ones we should pick up right away. In the worst case, you can just leave it like that, and we'll pick up the changes as we recrawl.
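A minimal sketch of that "sitemap with a new change date" idea, in Python-- the URLs and file name are placeholders for the example, not anything from the Hangout:

```python
from datetime import date

# Hypothetical new URLs after the restructuring.
changed_urls = [
    "https://www.example.com/products/blue-widget/",
    "https://www.example.com/products/red-widget/",
]

today = date.today().isoformat()
entries = "\n".join(
    "  <url><loc>%s</loc><lastmod>%s</lastmod></url>" % (url, today)
    for url in changed_urls
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    "%s\n</urlset>" % entries
)

# Write the file that gets submitted in Search Console / referenced in robots.txt.
with open("sitemap.xml", "w") as f:
    f.write(sitemap)
```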
"The indexed count for my website was reduced dramatically. Should I re-upload my sitemaps?" That's not something where you would see an immediate change in search because of it, so it's not something I'd worry about in that sense. With regards to the two changes that we recently had there: on the one hand, there was a day where the numbers dropped because of a data glitch, and on the other hand, we did change something on our side in how we calculate those counts. For some sites, that results in a visible change in the numbers that we show, but it's not a change in what we actually show in search-- we're essentially trying to show more accurate counts. I suspect those numbers will change again at some point in the future when we make additional changes on our side.

"We're currently struggling with our client's site. Can you give me some pointers in the right direction?" It's really hard to say what should be done with a website like that on the fly in a Hangout. What I'd recommend is getting help and advice from peers, for example in the webmaster help forums, where you can also explain a little bit better what specifically you're having trouble with and where you're seeing problems in the search results. That makes it a lot easier for people to give you useful advice.

"Will Google still count links from websites or pages that don't exist anymore?" If a page that links to you doesn't exist anymore, then we can't really use those links-- we look at that page, we see that it doesn't exist anymore, and we drop it together with the links on it. So a link from a page that doesn't exist anymore doesn't really change anything for your site.

"We host a general PDF on our site, and someone else hosts the same PDF in other locations. Which one will be shown in search?" Theoretically, we might try to take that into account when deciding which one we'd rather show. In practice, it probably doesn't make much difference in a case like this, and people will get to that PDF no matter where it's hosted.

"An h1 that's styled through CSS versus an h1 tag without any CSS-- which one is better, and is one more helpful for ranking?" That's not really a question from our side. It's not that we would say an h1 styled one way is stronger than a normal heading. We essentially use the headings to understand the structure and the context of the page, and how you style them with CSS is up to you.

"Is there a way to get a specific property shown in the search result pages?" I don't know exactly which property this is about, and I don't think we have a general setting like that. The things we do show, such as rich snippets, are things that we have specified in the developer documentation, so I'd double-check what we have documented there and which of those make sense for your website. The thing to keep in mind is that there are lots and lots of ways of using structured data markup on your pages, and not all of them are used by Google or shown in search. So if you have a limited amount of time and you want to add something that's visible, I'd focus on the markup that is shown as rich snippets, for example, or that is used for other search features, and add that first.
"Is it a good practice to set up App Indexing for pages that are noindex, nofollow?" It doesn't have any effect in a case like that, because with App Indexing you're basically saying this app screen is equivalent to this page on my website, and then we can make a connection between the app and the website. If the web page has a noindex on it, then that connection doesn't really work, because if we don't show the web page in search, we won't show the app content for it either. It's not that it would cause problems-- it just doesn't give you any kind of advantage.

"Which better reflects what's actually being read by Google: Fetch as Google or the cached page?" Neither shows everything. The cached page is generally the raw HTML of the page as we crawled it, so if your server is returning something different to Googlebot, or doing something else special, that might not be reflected in the cache, and things like JavaScript are taken into account when we render the page, which you can see with Fetch as Google. For indexing, we focus more on the rendered version than on the raw HTML you see in the cache, and there are reasons why we show the HTML page in the cache rather than trying to render it there.

On the question about content that's only shown to Googlebot and never visible to users: we generally recommend not cloaking to Googlebot, because that can really cause problems down the line. If at all possible, I'd really recommend serving Googlebot the pages the same way users would see them.

"Is the Vary header only needed on dynamically served pages, or is it required on both the desktop pages and the mobile pages?" The Vary header tells us that the content is different depending on the user agent, which means that on one URL we might see the desktop content with one user agent and the mobile content with another, and it tells us, and anything caching in between, that we shouldn't just reuse one version for everything. So if you're doing dynamic serving, you'd use the Vary header on those URLs, regardless of which version is being shown, because what's returned is different depending on the user agent. On the other hand, if you have separate mobile URLs, then you don't need to use the Vary header, because every time someone requests the desktop URL they get the desktop page, and every time someone requests the mobile URL they get the mobile page-- it doesn't vary by user agent. So from that point of view, if it's not the same URL, you probably don't need the Vary header; if it is the same URL serving different versions, then you'd use the Vary header on all of those responses.
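Here's a minimal sketch of that dynamic-serving setup, assuming a Flask app-- the route, templates, and the crude mobile check are all invented for the illustration; it's the Vary header on the response that matters:

```python
from flask import Flask, request, make_response

app = Flask(__name__)

MOBILE_TOKENS = ("Mobile", "Android", "iPhone")  # crude check, for illustration only

@app.route("/products/<slug>")
def product(slug):
    ua = request.headers.get("User-Agent", "")
    if any(token in ua for token in MOBILE_TOKENS):
        html = "<html><body><h1>%s (mobile layout)</h1></body></html>" % slug
    else:
        html = "<html><body><h1>%s (desktop layout)</h1></body></html>" % slug
    resp = make_response(html)
    # Same URL serves different content by user agent, so say so explicitly.
    resp.headers["Vary"] = "User-Agent"
    return resp
```

With separate mobile URLs instead of one URL, the header wouldn't be needed, matching the answer above.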
"Do CSS and JavaScript play a major role in how Google and other search engines understand your content? Is it required to let Googlebot access them?" It helps us to render the pages the way a user would see them. On the one hand, that matters for things like mobile-friendliness, because we look at the CSS and JavaScript to check that the page works in a mobile browser and doesn't do anything crazy that overflows the mobile browser. On the other hand, if a site uses JavaScript to pull in its content, and we can't access that JavaScript or the server responses it needs, then we can't index that content. So you might be doing a lot with JavaScript in a way that works well for your users, but if we can't process that JavaScript, we basically see an empty HTML page and we can't get the full context for your site to understand how we should be ranking it. So it's certainly not required that we're able to see all of this, but it does help us a lot to really understand how we should be ranking these pages, where we should be showing them, and which queries we should be showing them for.

With structured data it's similar: it's not that we would say this page uses markup, therefore it must be good. But it does help us understand the context a lot better, understand how things fit in, and what the content on the page actually means. One example: if you have a restaurant, and the opening hours are on your web page just as a raw HTML table somewhere on the side, it's really hard for us to work out what that actually means-- is this restaurant always open, or is this just a table with some numbers in it? Whereas if you use structured data markup for the opening hours, we can see a clear answer when we look at that page: we see that this type of business is a restaurant, here are the opening hours, and when someone is searching for this location, we can connect that with your business and the reviews it has. So that does help us a lot. But it's not the case that just adding markup to your site automatically makes it rank better.
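The kind of machine-readable markup being contrasted with a plain HTML table would look roughly like this. The restaurant details are invented for the example, and the JSON-LD is generated with Python only so the snippet is runnable as-is:

```python
import json

# Invented restaurant details, purely to show the shape of the markup.
restaurant = {
    "@context": "http://schema.org",
    "@type": "Restaurant",
    "name": "Example Bistro",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "Zurich",
        "addressCountry": "CH",
    },
    # Machine-readable opening hours instead of a free-form table.
    "openingHours": ["Mo-Fr 11:00-22:00", "Sa 12:00-23:00"],
}

print('<script type="application/ld+json">\n%s\n</script>'
      % json.dumps(restaurant, indent=2))
```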
"Google is showing descriptions for our pages, but the corresponding original page is 301 or 302 redirected. What should I do?" This one sounds a bit complicated, so I'd probably recommend posting it in the webmaster help forum, where you can explain the situation a little bit better-- which pages these are, whether they're yours or on someone else's site, and why it was set up like this. Giving some more context around this question makes it a lot easier to answer; I'm not sure what exactly the situation is here.

"A description that shouldn't be there has been cached in Google, and I want to delete it from Google search. How can I do that?" Ideally, remove that page or that content from the server where it's hosted, and once you've done that, use the URL removal tools and say, well, this is a URL that I removed. That's a pretty straightforward process, and the URL removal tool will help you get it taken out. If you've just changed the content on that page, there's also an option for cached pages where you specify a word that was removed, and we'll double-check that that's the case and then update or drop the cached version. On the other hand, if this content is on someone else's site and it's still a public page there, that's really hard for us to act on-- one person wants a page removed that someone else put on the internet. In a situation like that, I'd really try to work together with the person hosting that content to see if it can be removed, or go through the normal legal processes if it's something problematic.

Then a question about doing well in two or three countries: for example, on google.com we're on the first page, and on google.de we're way down on the third page-- is there any setting to facilitate that? You have geotargeting, but from my understanding you can only use it for one location. So yes, geotargeting is for targeting a single country: you can say my website, or this part of my website, is meant for users in this country, and if someone there is searching for something local, we know that this is locally relevant for that user. But it's not the case that you can say it's relevant in all of these countries and should be treated as a local result in each of them, because these are actually different countries-- a site can be locally relevant somewhere, but it's also visible everywhere. And there's different competition in each of these countries as well. Maybe you're doing really well globally, but that doesn't automatically mean you'll be just as strong in every individual country. For example, if you take a really strong brand-- I don't know, maybe Coca-Cola, something like that-- which is really popular and operates in most countries but maybe isn't the most popular choice for soda in every one of them, you might see individual countries having different rankings.

"Is it OK to use subdirectories on a generic top-level domain for country landing pages?" Yeah. With generic top-level domains you can say, well, this directory is geotargeted to this country, and how much content you put into that subdirectory is up to you. It could be that you have everything on the main part of the site and the landing page just says users from this country have these local delivery options, for example. Another thing you can do, if we're showing the wrong version in a country, is to use hreflang markup. So if, for example, in Australia we're showing the UK version of your page, with hreflang you can say this one is for Australia and this one is for the UK, and we can swap out which URL we show-- it doesn't change the ranking, it just changes which version of the page is shown to that user.

Any more questions from you guys? Otherwise there's also the Q&A link where you can add more questions. I don't think I have time for this one now, but there's another Hangout later today, and you can definitely add your questions there. All right, let's take a break here, and I'll try to figure out what's up with my laptop. Thank you all for coming-- I wish you all a great weekend.

Welcome everyone to today's Google Webmaster Central office hours. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what I do is talk with webmasters and publishers like the people here in the Hangout, or people that submitted lots of questions already. As always, when we get started, is there anyone new here in the Hangout live that wants to get started with some questions? Yeah, hi, John. I don't know if you guys can hear me. Yes. I hope you do. OK, cool. Hi, first of all, thank you for inviting me. I'm really happy to be here. I always watch the Hangouts. It's the first time I'm actually on one. I got a question regarding something that happened to one of the websites I take care of. Part of it got de-indexed due to having hreflang tags for different languages but also canonical tags in the language folders. So every language would have their own canonical tags for that language and also the hreflang tags. And I'm wondering why it got de-indexed. Like two-thirds of it just disappeared from Google. Now, the weird part is-- maybe the canonicals were conflicting in signals or something. But whenever I would do a site search for the pages that got de-indexed, I would not get the result. But then if I would search for certain keywords for those pages, they would still be ranking. So if they're not indexed, how can they still be ranking? I'd probably have to look at the specific URLs. If you have, I don't know, a forum thread or somewhere where you're discussing this already, I can take a look at that. I've mentioned it on the Google forum, and that's how we found some of the issues being related to the canonical tags. But the problem is my client doesn't want me to make this public. So can I send you a private message with a URL? Would that work? Sure, you can send that to me. I don't know if I can reply directly. For a lot of these things, I can pass them on internally. But I can take a look to see if maybe there's something missing in the forum thread as well. In general, the hreflang crosslinks wouldn't remove a site from the index. It wouldn't remove the URLs from being indexed.
The canonical, you could theoretically do that in a way that could result in some of these URLs being removed from the index. But that wouldn't be if each language had their own canonical. That would be, for example, if all the canonical tags were pointing at the home page. And that's something where we might think, oh, well, all of these things are the same as the home page, so we'll just keep the home page and drop everything else. But that doesn't sound quite like what you saw. Well, we did manage to exclude everything else. And there was one weird thing. The cache of one of the pages was showing the URL for a different language. So it was cached as the other language that was still showing as being indexed. So it was some of the language versions were removed, or some of the pages completely? Yeah, two out of three languages were removed. Now the other issue is it's not very different languages. So it's not English, French, Italian. It was variations of the same language because of different regions that speak the same language. So it was German for Germany and German for Austria, for example. OK. Sometimes what can happen if we don't have a clear hreflang annotation there is that we'll assume that these pages are the same, and we'll fold them into one. So you'll see that when you do like a site query. We'll just show one of those pages there. If you do an info query for maybe the German and Austria version, then we'll show that URL in the search results for the info query. But we won't show it for a site query, for example. So that might be what was happening there. But if you have cleaned up the hreflang markup, then that shouldn't be happening there anymore. Well, actually, we've removed the canonical tags from the other languages and left the main language with canonical tags. We still have the hreflangs, and those are implemented correctly. So that is not a problem. The question is, how can I make this faster? I've fetched it with Google. I've resubmitted sitemaps. I just can't wait any longer. Some of these things just take time. It's hard to say what you'd really need to do there. But I think one aspect that would be really important is to make sure that the canonical tag isn't across the different language versions. So your German for Germany should have the canonical pointing to itself, and the German for Austria should have the canonical pointing to itself. It shouldn't be that the German for Austria version has the canonical pointing to the German for Germany version. This is the situation-- every language folder has a canonical pointing to self and not to a different language. That sounds good. But I guess I'd really need to take a look at the example then to see what the current status is and what is holding it up. Or maybe there's still something that needs to be done. Maybe it's just a matter of time. Yeah, I understand that. Well, I'll try to message you. Maybe it gets to you. OK, sure. Thank you very much.
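To make the recommended setup concrete-- each regional version with a canonical pointing to itself, plus hreflang pairs between the versions-- here's a minimal Python sketch. The folder layout and domain are invented for the example:

```python
VERSIONS = {
    "de-DE": "https://www.example.com/de-de/",
    "de-AT": "https://www.example.com/de-at/",
}

def head_tags(current):
    """Tags for the <head> of one regional version of a page."""
    # Canonical points to this version itself, never to the other region.
    tags = ['<link rel="canonical" href="%s">' % VERSIONS[current]]
    for lang, url in VERSIONS.items():
        tags.append('<link rel="alternate" hreflang="%s" href="%s">' % (lang, url))
    return "\n".join(tags)

print(head_tags("de-AT"))
```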
All right, more questions from some new faces in the Hangout? Anything specific? As always, we'll just start with the submissions. I'm not a new face, but could I ask a question, or should I wait till later? Sure, go for it. All right, so back on August 23rd there was some type of down trend in pretty much every index status report I saw in Google Search Console. And I heard from German webmasters that you said in the German Hangout that the new, lower number is the new normal level, and then I asked PR at Google, and they said, yeah, everything you see now in terms of the lower number is a more accurate count. And then I saw a bounce back up on the 30th, and it's still at the pre-drop levels today, now that the index status report has updated again. So what was that down dip that you said was-- or that Google said was-- the new normal? I think that was a normal data glitch where something got messed up with the counts. And we showed that in Search Console like that. So I think the difficulty there is the timing, in that we had this weird data glitch that we noticed afterwards. And at the same time, or roughly the same time, we also made adjustments with what we would show in general. So it looked like some of these days were based on the adjustments that we did, the answer you got from PR, the thing that I said in the German Hangout. But at the same time, there was this glitch in the numbers there. So it's a bit of an odd coincidence. So both were correct? Yes. So when glitches happen, it doesn't affect ranking whatsoever, right? It shouldn't. So, I mean, theoretically, you could see that some kind of a glitch happens, and it does affect the search results. But in general, that's something you'd see right away in the search analytics report, in the traffic to your site-- all of that. So if you see a weird glitch like with the index status counts and you don't see any change in your search traffic, then probably that's just something that glitched with the counts there-- nothing crazy. Are there ever ranking glitches by accident? I am sure there are. I don't know of any offhand. But things can always go wrong, and we have a lot of backup systems and a lot of checks and balances to make sure that things don't break. But theoretically it's thinkable that something will go wrong at some point, and it'll be visible in search. Speaking of updates, is it possible-- I put up a question. So I was just wondering, I guess I'll wait for it, but I just wanted to know something regarding updates. So I was wondering if you could transition to that question. If not, I'll wait. All right, well then I'll get started at the top and work my way down. And if we don't make it to your question, feel free to ask at the end. Thanks. All right, "I'm working on a site that was owned by a company in the Netherlands. The domain has been sold to a US company. I set the target to the US in Search Console and followed other best practices. Yet the site only ranks in Google Netherlands and not on google.com." This sounds like either a situation where you need to give it a little bit more time to update all of the information that we have there, or where it might make sense to send me that domain and a little bit more information about what happened there, so that I can double-check to make sure all of our systems internally are working as expected and that they are not stuck on the old state for that site. But in general, this kind of a change-- especially if you have a really well-established site in one country, and you say, well, all of this should be valid for a different company or a different country-- to propagate all of that information just takes a lot of time sometimes. So that's not something where you'll see a change, a flipping over, happen within a couple of days. Sometimes that does take a month or so.
"If your site is suffering from an algorithmic penalty, and you make positive changes, how long before you see the results? Do you just wait until the page is re-craweled? Are many of the ranking adjustments done in real time? Or is there also a time element?" So in general, we don't call these algorithmic penalties. These are essentially algorithmic adjustments the way our search works. This is the way that we do rankings. And a lot of these algorithms that are active in search results, they do everything from trying to recognize the content to figuring out if this is a high-quality content source or not of if there are any webspam issues that need to be taken into account. And that's not something where we'd say it's a penalty that we're demoting this site because of something they're doing wrong but rather algorithms are trying to figure out where is the appropriate place to put this site in the search results. So just a note on the side I guess there. When it comes to how long it takes to see these results for some of these changes, those changes are visible very quickly. For other algorithms, it does take a bit of time until everything bubbles up, until everything settles down at the right place and is able to take into account the new changes that you've made. For example, if you've made site-wide changes across a website, then we do need to take a look at your website in general and understand that this new site-wide information that we have from your website applies here, and that can take a couple of months for everything to be re-indexed, re-processed-- all of that. On the other hand, if you change individual pages, you change, for example, a title on a page, then that's something that can be reflected in search within maybe a day or so. So it's not that there's one clear line where we say, well, you made this change now and this will be always visible exactly a week later or two weeks later. That's not the case. "Is it safe to reuse content from pages which have either 301s or 404? We're redoing our site structure, and we have a lot of useful content we want to keep. But we obviously don't want any duplication penalty if it's on more than one page." So first off, there is no duplicate content penalty, so you don't have to worry about that. That's one less worry, I guess. In general, if you're changing your site structure, you're welcome to reuse your site's content. That's the perfectly natural thing to do. If you're moving URLs, if you're removing URLs, if you're folding pages together, all of that is a good reason to reuse that content and take that into account. So by all means, you reuse the content that's good, make sure that your new site the best collection of all of your content, and then you should be good to go. "About six weeks ago we moved our site to a new domain together with a redesign, responsive design, HTTPS migration with 301 all new URLs and did a site move in Search Console. The week after, we lost 70% of the organic traffic and still aren't ranking. What's up?" So I probably also need to have this site to kind of see what exactly is happening there. In general, these kind of things should be less problematic. Sometimes we see that there are technical issues that are happening where maybe we can't follow the redirects properly. Maybe something is blocked by robots.txt. Maybe the server is blocking us somehow. All of these things can happen. 
So these are things that you usually find if you post, for example, in some webmaster help forum, either in ours or in a general webmaster community where other people have gone through these steps as well. You're also welcome to send that to me, and I can take a look with the team to see if there's anything abnormal that's kind of holding the site back or that's taking time to be re-processed there. But also, maybe in Webmaster Tools they haven't done the site move correctly, right? The question says that they did that-- the site move in Search Console. It does check for some of the default settings there, so that the home page is redirected, that you have both of them verified, for example. But it doesn't check everything that's across the whole website. So maybe there's something that's still kind of stuck somewhere. By the way, John, they can use Webmaster Tools, or Search Console, to see crawl errors, and maybe, if I remember correctly, it also shows the redirects from the old site, and in the new account it shows 404 errors and links from-- I'm not sure it does that-- if it also shows the redirects as a source for 404 errors. I don't know. I think it does. It's possible. There are some constellations where I think that happens. But I don't think it's by design. It's not meant to be a confirmation that you're doing a site move properly. But I think you might see some effects like that if you know exactly what to look for. But I wouldn't use that as a sign that the site move is working as expected. But like I said, you're welcome to send me the link, and I can take a quick look to see if there's something obvious that I can let you know about or if there's something stuck on our side. And sometimes these things do take quite a while, and if you move, for example, to an old domain that had a lot of problems in the past, then that's something that will be mixed in with your website as well. So, for example, if you move to a domain that's been used for years for really black hat spam tactics and has all of these spammy backlinks attached to it, then that's something where you're inheriting at least some of that by doing a move to a domain like that. So that's something you might want to watch out for when you pick a new domain, to make sure that things are really aligned properly-- not that you're moving into essentially a broken house instead of a shiny, new office building.
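For the mechanics of a move like the one discussed above-- every old URL answering with a permanent redirect to the same path on the new domain-- here's a minimal sketch, written as a Flask app purely for illustration (real sites usually do this in the web server or CDN configuration; the domain names are placeholders):

```python
from flask import Flask, redirect, request

old_site = Flask(__name__)
NEW_DOMAIN = "https://www.new-example.com"  # placeholder for the new domain

@old_site.route("/", defaults={"path": ""})
@old_site.route("/<path:path>")
def moved(path):
    # 301 = permanent: tells crawlers the move is meant to stick,
    # so signals can be forwarded to the new URLs.
    target = NEW_DOMAIN + "/" + path
    if request.query_string:
        target += "?" + request.query_string.decode("utf-8")
    return redirect(target, code=301)
```

Per the answer above, you'd keep these redirects in place for at least a year, and longer if that's reasonable.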
"We use rel next and rel previous for pagination, and a rel canonical tag to the first page, as opposed to the View All page, as we feel this is better than showing 100-plus items on a very long page. Is this wrong? Will it hurt rankings?" So what will happen there, in a case like that, is you have all of these paginated pages, and each of these pages has a rel canonical pointing back to the first page. So what will happen is we'll drop those individual pages from that paginated set, and we'll focus on that first page. And if on that first page all the content is available that you want to have shown in rankings, then that's fine. On the other hand, if those individual other pages are important to be shown in search, then maybe that's not exactly what you're trying to achieve. One example there might be if you have something like an e-commerce site where you have one page that lists all of your items. And from there, it links to the individual product pages. Then if we don't have any other form of navigation to those product pages, and we're only focusing on the first page of that paginated set, then we might miss links to those individual product pages. On the other hand, if you have a normal navigation on the website, and you have this long paginated set, and you're only indexing the first page, then we'll still find those individual product pages anyway through the normal navigation. So that's something where it depends on your website and what you want to achieve. Maybe it does make sense to focus on the first page. Maybe the additional content is something you do actually want to have indexed.
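As an illustration of the markup being discussed-- rel prev/next across the set plus a canonical on every page pointing at page one-- here's a small Python sketch; the category URL pattern is invented:

```python
BASE = "https://www.example.com/category/shoes/"  # invented URL pattern

def pagination_tags(page, last_page):
    """Head tags for page `page` of a paginated set, canonicalized to page 1."""
    tags = ['<link rel="canonical" href="%s">' % BASE]  # every page points at page 1
    if page > 1:
        prev = BASE if page == 2 else "%s?page=%d" % (BASE, page - 1)
        tags.append('<link rel="prev" href="%s">' % prev)
    if page < last_page:
        tags.append('<link rel="next" href="%s?page=%d">' % (BASE, page + 1))
    return "\n".join(tags)

print(pagination_tags(3, 10))
```

As the answer notes, canonicalizing everything to page one folds the set into that page, so this is only the right choice if the deeper pages don't need to be indexed on their own.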
"I have two identical sites, a .com and a .co.uk. The cache query shows the site-- the cache for the .com shows the site from the .co.uk. And I added rel canonical tags to make it clear that these are two different sites. Do you have any tips to speed up the canonicalization process?" So what probably happened there is we saw that these pages were the same, or they appeared to be the same to our algorithms. And we thought, well, the webmaster just accidentally put up two versions of the same content; we'll help the webmaster clean that up and fold everything into one clean version, make it easier to index, make it easier to show in the search results. So that's probably what you're seeing there. Adding any kind of additional information to let us know that these are really two separate pages is a really good idea. A self-referential rel canonical is a great thing to do here. Maybe the hreflang between those pages. Maybe making sure that the content itself is also unique would also help us to figure out that these are actually unique pages that need to be indexed separately. So all of these things are things that you could do. I don't think there are any magic tricks to have this happen right away. So doing or implementing some of these things makes sense. Making sure that it can be crawled and indexed like that makes sense. And as we re-index those pages, we'll probably recognize that they are separate, that they shouldn't be folded together, and show those separately in search. So one thing that you could do to jump-start that process is to let us know that these pages changed and that we should re-crawl them, which you can do either through a sitemap, for example, if you give us a new date for the sitemap URL, or you can use the Search Console tool-- what is it? Fetch as Google and submit to indexing-- where we can take that URL and send it to indexing right away. But the first step is really to make sure that all of the signals that we collect about these pages tell us that these are unique pages that we need to process separately. "Our mobile site is showing a lot of content using click-to-expand links, but our desktop version doesn't. Is this going to cause any problems, either from ranking or other points of view? We're doing this so our mobile version offers a better user experience." In general, that's fine. If your mobile pages are equivalent to your desktop pages, they don't have to be exactly the same content. They don't have to have the exact same UI. If this user interface works well for your mobile web pages, then by all means, keep using that. Yeah? Go ahead. John, I had been trying for a long time to get connected, but was not able. Actually, John, I had one question regarding mobile sites. Sure. Sometimes I feel that the important links I'm showing in my desktop site are all important in my mobile site as well. But in many forums, in many places, when I went looking I found people suggesting there should not be as many links in mobile sites as in desktop sites. Is there any restriction for this from Google's side or not? No. Not that I'm aware of, at least. You can make your mobile site however you think makes sense. The important part for us is that we can recognize that it's mobile-friendly. So kind of not blocking the resources that are shown there, so that we can see the page as a mobile user might see it. But apart from that, we don't have any specific restrictions there. So if you think these links make sense for mobile users, by all means, keep them there. If you think that they're not so critical for mobile users, maybe it makes sense to fold them away in an expandable thing or maybe even to not show them at all for mobile users. That's essentially up to you. OK, thank you. Sure. There's this question about the video snippets. I don't have a newer answer for that, sorry. So I don't have any update there at the moment. "We load jQuery on all of our pages. The minimized version we use is 96 kilobytes. This is 4 to 10 times the size of the HTML code for the page. Can downloading this amount of code relative to the size of the content dilute the relevance and quality of our content?" No. So, one direct answer. Thanks, John. Also, just a quick follow-up on that, because the reason that question came up was from using PageSpeed Insights and looking at their recommendations. Another thing that it always wants us to do is to minimize the HTML code and get rid of the extra tabs and lines-- blank lines and things. Is that something that could have any effect on ranking? Or is it just-- That's essentially just tips to speed things up. So if you can reduce the size of your HTML pages and it works faster for users, then I guess there's this kind of indirect effect, in that users like your site more if they stay on your site longer. They recommend it to others more. Maybe they'll stick around and actually buy something or convert if you have that set up. But essentially, there's no direct ranking change there. And for us, the page speed effect in ranking is more, I guess, kind of the line when a site is really, really, really slow compared to a site that's kind of normal. So if you're in this kind of normal range where these pages load within, I don't know, 20, 30 seconds maximum, something like that-- then if you're tweaking things there for a tenth of a second faster, that's not going to have any effect at all in rankings. But it might, of course, have an effect on your users. And if users are happy, maybe they'll recommend you more, and that'll be this indirect circle back for your site. But it's not that if you can tweak a tenth of a second out of your site, you'll see any kind of a specific ranking change. Great. Thank you very much. The other thing to keep in mind with the jQuery there is, if you're loading a version from a CDN somewhere, then often users will have that version already in their browser cache. So if you're using a standardized version that everyone is embedding, and you're embedding it from that same CDN as everyone is using, then sometimes users will be able to just take that from their cache-- the version that they picked up somewhere else visiting another website, they can reuse that. So that's a small speed bonus you can do there, even if you have this relatively large jQuery JavaScript file. That's a good suggestion. Thanks, I think we'll do that.
"We have a shop in Germany, Austria, Switzerland. We're using subdomains for each--" just try to mute you-- maybe not. OK. "We have a site in Germany, Austria, Switzerland. We're using subdomains for each region and follow the help of the Google Webmaster. Our rankings in the regions get mixed, for example, in Switzerland. The subdomain DE is showing. I also think we have a duplicate content problem. Should we use canonicals?" Using canonicals is always a good idea. It helps us to figure out which version to show in search results. So what you should be doing there is having the German for Germany version have a canonical to do the German for Germany version and the Swiss version to the Swiss version. So it shouldn't be canonicals across the versions, but rather within the individual versions. Using something like hreflang really helps us a lot to show the right version in the search results. It sounds like that's something that you should be doing there. I guess those are the main things there. The thing to keep in mind if you're using hreflang, you're seeing this kind of a mixup is probably we're not able to pick up hreflang information properly. So that's something where you should check Search Console as well in the international targeting section to see if there are any specific errors around hreflang that are shown there. Also, we have a German Hangout coming up in, I think, two days. So if you want to ask more questions about this in German, feel free to join that. And we might also have a bit more time there to look into your site specifically then. "We have dynamically generated search results pages that keep getting indexed Google over our actual content pages. How can we get these pages out of Google's index so Google will crawl more of the important pages?" I'd put a no-index on those pages. You can just put the no-index metatag on those pages. You could put noindex nofollow on those pages so that we don't get drawn into this maze of search results pages. Those are essentially the best recommendations there. "A competitor has bought over 100 URLs with the desired keyword optimized. Many websites under the guise of being different e-commerce stores link the sites to each other. They now hold the top two spots for this keyword. Is this black hat?" That does sound spammy, and maybe it's worth submitting a spam report for that. Usually, if you have a handful of sites, that's not something that we would really get involved in because sometimes it does make sense to have a small set of sites that are targeting something that's significantly different. But if you're talking about 100 different sites, then those start looking more like doorway sites, which is something that would be against our webmaster guidelines. "I recall you saying that if a 302 temporary redirect is in place long enough from one webpage to another, that Google will eventually treat it as a 301 permanent redirect. So after what time does that actually happen?" I don't think we have any specific time defined where we say after this many days or weeks or whatever we'll treat them as such. Essentially what we're trying to do is recognize that this is not a temporary situation, but rather that this is the permanent situation and that we should be treated as that appropriately. There's actually a pretty old blog post on this as well on Matt Cutts' blog about 301, 302 redirects and how they could be handled which kind of goes into this topic a bit as well. So that might be worth digging up I'm sure. I don't know. 
"A competitor has bought over 100 URLs optimized for the desired keyword. Many websites, under the guise of being different e-commerce stores, link the sites to each other. They now hold the top two spots for this keyword. Is this black hat?" That does sound spammy, and maybe it's worth submitting a spam report for that. Usually, if you have a handful of sites, that's not something that we would really get involved in, because sometimes it does make sense to have a small set of sites that are targeting something that's significantly different. But if you're talking about 100 different sites, then those start looking more like doorway sites, which is something that would be against our webmaster guidelines. "I recall you saying that if a 302 temporary redirect is in place long enough from one webpage to another, Google will eventually treat it as a 301 permanent redirect. So after what time does that actually happen?" I don't think we have any specific time defined where we say after this many days or weeks or whatever we'll treat them as such. Essentially what we're trying to do is recognize that this is not a temporary situation, but rather that this is the permanent situation, and that we should treat it appropriately. There's actually a pretty old blog post on this as well on Matt Cutts' blog, about 301 and 302 redirects and how they could be handled, which kind of goes into this topic a bit as well. So that might be worth digging up-- I don't know, maybe it's 2008, 2009 even, so pretty far back. But it's an interesting post with some of the difficulties around that. "Regarding schema.org, an attorney office has way over 20 attorneys. On the index page there is a slider showing all attorneys. Each one has its own item type and several item props. Is this regarded as spammy?" That generally wouldn't be a problem for us, in the sense that we don't use this kind of structured data markup for rankings. So it's not that there would be any kind of advantage from actually marking it up like this. It probably makes sense to have individual employee pages as well. If that's something that you want to highlight there, that might be something that we could show as rich snippets in the search results. But if this is a combined page with all of the employees, maybe linking to the individual detail pages, then marking that up doesn't really change anything from our point of view. So I wouldn't see that as being problematic. "My question is about x-default in hreflang. I have many languages on multiple domains-- for example, .co.uk for English, .it for Italian, .com for German. Can I use an x-default to my .co.uk even though I already have it for my EN?" Yes, the x-default can point to an existing page. You can, on your site, say, well, this is my page for English, and it's also the page I want to have shown for all other languages. You could also say, well, everyone speaks German; therefore, my German page is my x-default page. That's really kind of up to you. And you can apply that on yet another level, where you say, well, this is my English page for the UK and my English page in general and my x-default page. So you could take that essentially to three steps. Yeah, John, what if you don't have any x-defaults? How does Google decide if there is no hreflang matching the user's browser settings or anything else? We try to do that the way we've always done it. So even if you don't have any hreflang markup, we try to show the appropriate version. And sometimes we pick a good version. Sometimes maybe we'll pick the wrong version. But it's essentially how we would handle any other kind of situation where there isn't any explicit markup on the page telling us which one to show.
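For illustration, an hreflang set along the lines of that question, with x-default reusing the existing English page-- the domains mirror the examples in the question, but the exact URLs are assumptions:

```python
ALTERNATES = {
    "en": "https://www.example.co.uk/",
    "it": "https://www.example.it/",
    "de": "https://www.example.com/",
}

def hreflang_tags(default_lang="en"):
    """hreflang set where x-default points at an existing language version."""
    tags = ['<link rel="alternate" hreflang="%s" href="%s">' % (lang, url)
            for lang, url in ALTERNATES.items()]
    tags.append('<link rel="alternate" hreflang="x-default" href="%s">'
                % ALTERNATES[default_lang])
    return "\n".join(tags)

print(hreflang_tags())
```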
So just doing as much as you can to clean that up and letting us know about that is a good idea. The thing also to keep in mind here is if you explicitly search for those session IDs, we might still show them to you in search even if we follow the rel canonical. So what sometimes can happen is you do a site query for your site and you search for a URL with the session ID parameter, and we'll show that to you because we think, well, you're explicitly searching for this content. We know that this content exists here. Therefore, we'll try to show it to you. So that's something to keep in mind there: if you explicitly search for it, it'll look like it's not working. If you search normally for content on your website, then it'll probably end up working well. "Can you talk a bit about multilingual websites? What are the best practices? In WordPress, should content be on separate pages or on one? Can we use translation plugins?" We can hardly hear you. Let me go through this question really quickly, and towards the end, we'll have time for some more general Q&A. So talking a bit about multilingual sites is something that's probably worth a full hour Hangout, I guess, with all the different options. Some of the topics that you touched upon here in this question are quick to answer. Should content be on separate pages or on one? It should always be on separate pages. We should always be able to use separate URLs to get to separate versions of your content. If you swap out the content dynamically on a single URL, what will happen is we'll index one version, and we'll never find the other version. So that's something where you're losing that content if you use the same URL for different versions. "Shall we use translation plugins?" No, we would see that as auto-generated content. So I'd strongly avoid using plugins to create translated content. Using plugins in addition on your pages, to make it easier for users to see a translated version, is fine. "Do we need separate domains for each language?" No. Just separate URLs. Maybe you want to use separate domains or separate folders or directories or subdomains. But that's totally up to you how you want to handle different language versions. "We 301 redirect our product pages from the domain.com/forsale subfolder to domain.com/sold when a product is sold. Will these URLs be omitted from the search results due to content duplication issues, or will Google drop the old For Sale page eventually?" Yes, we'll drop the old pages eventually. As we follow those redirects and see that the new URL is the new location, we'll essentially take that into account. "I work for a company, and we're building a new website for our company domain. The owner wants to use the old website on a new domain for business purposes and A/B testing. What's the best practice? Do we noindex the old site?" I would-- Go for it. I would create a staging subdomain, and then you can do all the staging and give a password to whoever, and then you can test it that way. It's the best way. Yeah, I guess that makes sense if you're doing QA testing with an internal team. But if you want to do A/B testing for users, then you need to split that somehow. The best way would be to do a subdomain, I guess because then you can-- Well, yeah, or any kind of way to kind of separate the things. I think from the question one thing that sounds like what might be happening here is that the website is moving from one domain to another domain, but actually, the old domain is going to be reused for something else.
And that's a pretty bad scenario in the sense that we can't do a full site move in a situation like that because we'll have the new domain with the old content, and we'll need to figure out what kind of connection it has with the old domain. So we can't forward all of the signals from the old domain to the new domain because there's something new actually posted on the old domain. So ideally what you would do in a situation like that is use a new domain for the new content and keep the old domain for the old content or do a clean move from the old domain to a new domain and post something new on a completely separate domain. So try to avoid the situation where you're moving the content from one domain to another, and at the same time, re-using the old domain as well because then we can't-- we can't see any redirects from the old content to the new content. We can't process the site move for that. We don't really know what to do with all of the signals that are collected with the old domain like how should they be spread out. Should they be attached to the old domain for the new content or should they be moved to the old content on the new domain. So try to keep it as simple as possible so that search engines don't have to second guess. That's probably, I guess, the simplest rule of SEO is make it easy for search engines to figure out what you're trying to do. "We have separate desktop and mobile web pages. We don't use a subdomain. Mobile pages are in the same directory using rel canonical on the mobile pages. Is it necessary to put our mobile pages in our site map?" No, that's not necessary. As long as we can see that connection between the desktop and the mobile pages with maybe the link rel alternate and the rel canonical back, then that's perfectly fine. You don't need to list them separately in the sitemap file. "Is it OK to have AdSense above the fold?" I can't speak for AdSense policies. From a search point of view, that's perfectly fine. Lots of websites use ads on their pages to monetize the site. That's not something that we would see as being problematic. The main thing that we would see as being problematic is if the above-the-fold content is only ads. Then that's something where when a user clicks on a search result and lands on that page, then that's probably not a best user experience. But, otherwise, there are lots of websites that use AdSense above the fold or any other kind of monetization. And they still provide a great user experience, and they are perfectly fine in search. "We redesigned our site to optimize without keyword stuffing, got removed from spammy directories, et cetera, and we now rank on the first page of Bing and Yahoo, but not on Google. Any tips on why we can move up in other search engines but not Google?" I guess the most obvious answer to this kind of question is that, well, other search engines have different algorithms. They use completely different ways to handle rankings. So it would be normal to see different rankings in individual search engines. And maybe they get some things right that we get wrong. Maybe we get some things right that they get wrong and really it's essentially a different way of ranking the search results. So it will be normal to see those kind of differences there between different search engines. With regards to cleaning up a site, making it better, removing webspam issues, all of these things are things that definitely make sense that we do try to take into account. 
But it's not something that, in general, you would see a direct effect where you clean something up here and a day later then your site bumps up in the search results. That's generally not how it happens. It does take a certain amount of time for everything to get reprocessed, and that can be several months, maybe, depending on what kind of issues you cleaned up, how long those issues were in place, and how long it takes for us to technically recognize that these issues are actually all cleaned up. "Has the latest Panda fully rolled out?" I don't know. I tried to check before the Hangout, but I didn't receive an answer there yet. I don't know. So I don't have any full answer for you there. "Our site's index counts in Search Console and in the search results are higher than our own crawler's counts. The difference is over 15,000 URLs. What might be happening there?" That can definitely happen. It's really hard to say what specifically might be happening there. So on the one hand, I don't know how you count pages on your site with your crawler. On the other hand, it's hard to say what specifically you're looking at there. So a site query is probably not something I would use for any kind of diagnostic information, if you're looking at the "about" result count there. That's something where we optimize that count more for speed than for accuracy. It's not something where the average user, when they do a query for cars, wants to know exactly how many pages have this word on them on the whole internet, but rather we kind of give a bit more general information there. So on the one hand, I wouldn't use a site query for that. In Search Console, the index stats information is a great way to get an idea of how many pages we actually have indexed from your site. The sitemaps index count is another way to get that information, based on the URLs that you actually care about. Some things that might happen to blow up this index count could be things like session IDs. It could be things like URL parameters that lead to multiple variations of the page. For example, if you have a product listing page for shoes that has different options, where you could sort by size, sort by color, select a filter for these different options, then that can easily lead to really tons of pages being found and indexed separately, because they do have separate content. And we might show that in the site index count in Search Console as well, because we found these pages, we're indexing them, and we're letting you know about them. That doesn't mean that these are pages that you care about, or that you need to kind of worry about. So that's where the sitemaps index count comes in, where you can say these are the 100,000 pages I really care about. These are all the products in my shop, the category pages, my new blog posts, whatever I have for my website. And I want to know how many of these pages are actually indexed. So that's what you could pick up in the sitemaps index count. "Does it make sense to take advantage of rel canonical or a 301 redirect from post 1 to post 2 in order to take advantage of post 1's rank, even if, in this case, this is not 100% duplicate, but updated content?" On the one hand, if you're updating content, you don't need to change URLs at all. So maybe you don't need to redirect at all.
That is probably the cleanest approach there: if you're just tweaking things, improving things, then I would take that extra complexity of separate URLs and redirects out of the equation and just say, well, I updated the content. It's the same URL as before, and we'll pick that up and rank it appropriately. If you're updating the content and changing the URL, then using a rel canonical or a 301 redirect is definitely a good idea. "My client's website is outranked by the new Local Pack results for brand terms. This also happens when I search for the brand name without specifying any location. Is there a way to rank above the results within the Local Pack?" So I don't know specifically how the local search results there are compiled, so that's something where I don't really have that great of advice for you. But, yes, in general, you can rank better. So it's not that there's a kind of block there where we'd say, well, for this specific type of search result, only this type of site can rank. But back in one of the earlier Hangouts, Simon was mentioning that there was a beta test with results ranking above it. But now I don't see anything that's ranked above the new 3 Pack. So he has-- I don't know. I haven't specifically looked for that combination there. But a lot of times when I search for something that could be local, I'll see that local block and some websites above it and some websites below it. I don't know. At least I didn't notice recently that something like that changed. But in general, these are combinations put together based on the query, based on what we think the user might find relevant at the moment, and we try to bring the right combination of different search result elements together to help answer that. And it might be tricky here when the question is about the brand name. It probably depends on the brand name as well. If your brand is called, I don't know, SEO Toronto, then it doesn't automatically mean that you'll rank number one for SEO Toronto all the time, because that's something people might search for when they're looking for other things too. So just because it happens to be the name you registered as a brand doesn't mean that it'll always rank for that. OK. Maybe Baruch will rank for SEO Toronto first. I don't want to rank for SEO Toronto. OK. "We're seeing more and more Knowledge Graphs, Answer Boxes, and Google Search features on page 1 in the search results. I understand Google is looking for what's best for users. However, for SEO purposes, websites are losing real estate. Are these graphs part of search?" Well, yes, they're part of our search results. But like I mentioned before, we do try to figure out what elements to show to a user based on the query, based on the additional context we might have from this user, and sometimes that includes things like Knowledge Graph information, OneBoxes in the search results. Sometimes we show them; sometimes we don't. So it's something where I suspect over time we'll try to find a balance there. And if you notice that for specific queries we're showing results that don't really help the user, that are bad search results, that are showing information that's maybe even wrong or tangential to what the user is actually looking for, then that's good feedback to have. On the one hand, you can use the feedback link in the search results on the bottom. I know that's something that the teams here go through.
On the other hand, if you find a general pattern where these types of results are always bad, because Google shows this stupid element on top, which is not something that users ever want to see, then that's something you can definitely send to me, and I'll chat with the team to see what we can do to improve that, or to double-check that this is actually working as intended. Because what we do sometimes see as well is that people will come to us and say, oh, this is really wrong. Actually, you should be showing this other website instead of this block here. And this other website happens to be my website or a client's website, and just because of that connection you think this is the best search result; that doesn't mean it's the best search result for everyone. But we do take this feedback really seriously, and we do try to find the right balance there, to match what we show in search with what the user is actually looking for. And we do recognize that a lot of this content does come from websites, and we try to guide people to those websites as well where we think that it definitely makes sense. So, for example, what we'll sometimes do when we show this type of answer on top in the search results is we'll have a direct link in the search results, or we'll even have text like "find out more" or "read more about this on this page," to kind of really guide the user to that specific page where there's actually more detailed information that helps the user to answer the question. All right, it looks like we just have a few minutes left. So I'll open it up for questions from you guys. Yeah, I wanted to know how often title tags change in SERPs. How often what? How often do you guys update the title tags? The title tags? I don't know. It's possible that it's not for every crawl. But we do try to pick that up as quickly as possible. Yeah, John-- Go ahead. John, actually, recently, what was happening is that my company's CEO has the same name that some, you can say, fugitive had, and when I was typing my company CEO's name into Google Search, it was picking up that fugitive's information from a Wikipedia article. So I had no option in my mind. So what I did-- I just went to Wikipedia and mentioned there that Google should not pick up this information if somebody is searching for my company brand name with this person's name. I am not sure if that really helps, and what would you suggest should be done in a case where it is picking up some other person's information for the same name-- how do we deal with it? It probably makes sense to send that to us somehow. So you can send that to me directly through Google+, for example, and I can take a look at that. But putting text on the Wikipedia article isn't something that our crawlers are going to pick up and try to understand. OK, I will send it to you. OK, great. John, I've got a question as well regarding redirects, and whether 302 redirects can take on a permanent character after a while. We usually use them to send users from one website to another without passing value from the backlinks. So are we at risk of passing the value from those backlinks even if we think a 302 redirect would not pass it? Yes, a 302 redirect will pass PageRank as well. So using that as a way to block the PageRank from passing doesn't work. What you can do is use robots.txt to block that kind of redirect part of your website, so that we can't actually follow that redirect at all. Then that link kind of gets stuck there.
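To sketch the setup being described here, assume the outgoing links all go through a hypothetical /goto/ redirect script (the path is a placeholder); blocking that path in robots.txt keeps the redirect itself from ever being crawled, so nothing flows through it:

    # robots.txt -- keep crawlers out of the redirect handler
    User-agent: *
    Disallow: /goto/

On the pages themselves, the links would then point at something like /goto/?url=https://other-site.example/ rather than at the destination directly.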
I know some forums do that, for example: they'll have a jump script that takes users from one page to another or to a different website. And by blocking that with robots.txt, PageRank doesn't pass at all. We usually do that anyway, but I just wanted to be sure. Thank you. John, do you have time to address our current issue again and let me know if the new site is having any issues? Because we're five months down the line from moving, and we're still showing 100,000 links from the previous one, and we have no idea why they're still there. And the new one is now dropping like a stone as well. I probably need to take a look at that separately. Let me jot it down somewhere. Or can you post the URL in the chat? Then I can copy it from there. Sure. It's the one with the E. I've got it in there now-- it's listed in the chat. Right. There we go. We spoke about the 301s that we had on it. It's five months since they've been live, and every single one of them is still showing from the old domain to the new. Somehow it's been picked up as intermediate links. And you said before that's not affecting the ranking of the new site. But given the issues-- I'll take a quick look. I don't know what specifically should be happening there. But you're trying to separate these sites completely, right? Yeah. Yeah, OK. If you still have time for a question: I have a corporate client that I talked about last time. The issue is that-- let's say they have a very important page, book volume 1. And volume 2 appears, and if you search for it in Google, volume 2, Google still shows the volume 1 page instead of volume 2. Volume 1 has a lot of good links. It's ranking pretty well. But we want to show volume 2 where people are searching for volume 2, and the volume 2 page is new and doesn't have any links yet. Is there any way we can tell Google this is more relevant? Not easily. Not easily. We've added links from volume 1 to volume 2, telling the users that we've just launched the book [INAUDIBLE] front page. Yeah, all of these things help us. But if this page has collected value over a long time, then we'll still think this is the page to show. So if this is like the latest iPhone, and everyone has been linking to it for a year now, or two years, or whatever, and you come out with a new model on a new URL, then it'll be hard for us to understand that someone searching for iPhone is actually looking for that new page that we don't really have a lot of information about. So that's tricky. What does sometimes work is if you have a general page that you reuse, where you say this is, I don't know, this revision of this item. And from there, you link to an archive of the older versions of that item. Then we can focus on that general page and say this is the long-term page that we do show in search for that specific item. Sometimes that's not easily possible, like with book series that are actually separate items-- those kinds of things. And it's really kind of giving us all of these signals, like you said: the link from the home page, maybe a link from the old product to the new product, a banner for the user. Those kinds of things help. But there's no secret meta tag that you can add there to say, well, push this page instead, but don't completely de-index my old page. Right, so just accumulate as many signals as possible. Yeah. John, did you get the recent emails I sent? Probably, I think so, yes. All right, with that, let's take a break here. There are people waiting for the room. So I have to jump on out.
It's been great talking with you all again-- lots of good questions. Thanks for all of that. And if you could send me the individual details, that would be useful. I can't promise to answer every email, but we do take them into account. We do pass them on to the team to double-check and see what's happened with them. So thanks again, and hope to see you again in one of the future Hangouts. Bye everyone. Bye, John. Thanks. OK, welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do is talk with webmasters and publishers, people like those here in the Hangout who work on websites, and I'll try to answer the questions that come up. So, with that, we have some new faces, some newer faces, in the Hangout. Do any of you have any questions? Yes, this is my first time, so these might be two basic questions. They're about rel next and previous tags. I'm actually facing an issue at the moment that's related to paginated URLs. What happened was we asked our developers to add rel next and previous to our paginated pages. Unfortunately, they also added a rel next on the last page of the series, so Google is now crawling URLs which don't actually exist as search results pages, and they got indexed. What I came up with is adding a noindex to those, and that's why they're dropping out of the index now. What I would do there is return a 404 for pages like that. If you open one of those URLs and there's nothing there, just return a 404, so that we can clearly drop it. Another option is to return the page with a noindex and continue. Ideally, if nothing within your search results pages links to them any more, we'll stop reaching those pages after a couple of months anyway, so you don't need to do much more. And I was thinking, if that's the last page, could I add a parameter, let's say page equals 10, and then block everything beyond that, so that even if a user accidentally types page number equals 999 and it doesn't even exist, it still doesn't get crawled? Is that satisfactory, or should I try to just return a 404? With a robots.txt block there, we don't know if the page exists, so we wouldn't know that we should drop it, whereas if we see a clear 404, that's a very direct sign. Thank you so much. There was also a question about URLs that have been returning 404 for years and that we keep trying: we are very patient with 404s. With, I don't know, a million pages that are not being crawled that frequently, that sometimes plays into it; we think maybe we should crawl them again, so that we're not missing anything, because sometimes we notice that these pages come back, and then it's good that we can pick them up again. And on the domain you mentioned, I took a quick look at that, and from my point of view you already have content on those pages, so you should definitely be OK. Great. Any other questions from the new folks, anything specific? Otherwise, we'll start with the questions that were submitted. I just have a quick question about nofollow links. I understand you don't want people manipulating things like paid links, but does a newer site that mostly has nofollow links pointing at it have a harder time? Normally I just want the most organic results I can get. It's really useful for us to understand what kind of background a link has. So, for example, if something is an advertisement, then it should have a nofollow, because then we shouldn't be passing PageRank through it. Or if it's user-generated content, then having a nofollow lets us know that this isn't something that the webmaster can always stand behind. So I think that really helps us.
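As a small illustration of that, the markup for a paid or user-submitted link is just the normal anchor with a nofollow attribute; the URLs here are placeholders:

    <!-- a paid placement -->
    <a href="https://advertiser.example/" rel="nofollow">Sponsored partner</a>

    <!-- a link left by a user in a comment -->
    <a href="https://unknown-site.example/" rel="nofollow">commenter's link</a>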
And with regard to new sites, it's also important to keep in mind that we don't just look at links. It's not that we expect all sites to have a lot of links. We do try to understand the content itself, and in some cases we can rank a site even if it doesn't have many links at all. So it's not something where we require that you go out and collect links; we do take the larger picture into account. And on that note, how would you think about nofollow links if you had two websites and you nofollow the links to the other website because you're not sure about it-- is that a problem for the other website? A nofollow essentially just means that PageRank isn't passed. So from that point of view, if a site has a lot of nofollow links pointing at it, that doesn't mean anything bad; it's not that there's any negative signal attached. That's something to kind of keep in mind there. And in general, if these sites belong together, then having followed links between them is fine. If you're talking about hundreds of sites, obviously that doesn't scale, and that's not something I'd do. But if you're talking about two sites that link to each other and they belong to the same person, then maybe they belong together, and that's fine. All right, let's run through some of the questions that were submitted, and we'll open it up for discussion again afterwards. An SEO correlation study shows a relation between using Google Analytics and ranking: we don't use Analytics when it comes to crawling, indexing, and ranking, so any correlation that you see there is just that, a correlation. Sometimes it's the technologies, for example: just because the sites that use them are popular doesn't mean that they rank well because of them. It might just be that Google Analytics is something that works really well for websites, and these websites also rank well in search, and that's great for them, but it's not the case that they rank better in search because of it. So it's important to always take this into account when you're looking at ranking factor analysis reports. Sometimes two things might look like they belong together, but it's not because one causes the other, and something that works for one website doesn't necessarily work for other websites. That's definitely worth watching out for. In general, we don't treat Google products in any way differently. Using Analytics or AdSense doesn't mean you have an advantage or a disadvantage compared to someone who uses some other tracking or ad product. Sometimes we see these conspiracy theories about more weight for advertisers or less weight for advertisers, and that's really not the case. When it comes to search, we try really, really hard to be as neutral as possible and to make sure that advertising doesn't have any effect either way. If we have a product that's only for smartphones: you can have a mobile-only site, and that's just perfectly fine. That's not something where you need to have a desktop version for every website. You can have a website that is only mobile. It'll still work on desktop, it will just look a little bit different, and that's perfectly fine in search. The one thing I'd try to make sure of is that it still works when it's opened on desktop, because what generally happens is we'll just include it in the normal search results and present it there as well. So just set it up in the way that works best for you; you definitely don't need a desktop edition.
Do we give the same importance to content which is, for example, loaded as the user scrolls down? If we have the full content on the desktop page, then we'll use that for ranking. If content only appears when you scroll, or when you swipe in a certain direction from a certain place, then chances are we won't be able to see it. The main difficulty there is really knowing what to do with all of that content, because Googlebot doesn't scroll or click around the page, so all of these things are really hard for us to pick up. That's what we sometimes see with infinite scroll, where as you keep scrolling you see more and more additional content. It's not that there's anything wrong with that, but it's a situation where you have to think about which content you actually want to have indexed. The robots.txt testing tool is showing a URL as blocked even though we think it's allowed-- is that normal? No, that shouldn't be the case. The robots.txt tester should give you immediate feedback there, also for embedded content. I don't know your website and I don't know the actual case here, but my suspicion is that you're testing a URL that is allowed one way, but that allow is being overridden, and that happens with longer, more complicated robots.txt files. So that's something you might want to double-check with a friend, to get a second pair of eyes on it and really try to figure out which line is blocking the URL. I noticed that they added an allow statement there as well. Yes, that can happen as well. Sometimes there is an allow statement, say for a CSS directory, but the disallow statement is longer, and then CSS files located under that longer path will still be blocked, because in robots.txt the longer, more specific rule wins. It's hard to figure out, so having someone else's eyes on it makes it a lot easier to see what you totally missed the first time. Next, some URLs are showing up that are not listed in the sitemap. What might have happened here is that those URLs returned a 404 at some point, or went through a period, shorter or longer, where the redirect wasn't set up yet, and these are probably still leftovers from that time that are shown. If everything is set up properly at the moment, I just wouldn't really worry about it; the next time we recrawl those URLs, we'll see the redirect and take that into account. Do social signals have an impact on organic rankings in Google? It's not a ranking factor. To a large part, social networks also put a nofollow on the links that they provide when you post, so it's not the case that they'd pass PageRank. What you do see is that social profiles can be content, and they can rank for your keywords, so they can show up in search, which in turn gives you a little bit more presence. Another aspect, with Twitter and Google+, is that when we recognize those social network profiles, we may try to show them in the search results as well. So it's more that there's simply more content out there with your company name, your brand, or your product that can show up in searches. We're working on our caching policy to improve the lifetime of the caching rules-- does that matter for search? Using caching or not using caching isn't a ranking factor as such. What it might change is how your pages are refreshed, for users and for any caching networks between your site and the user; if we know a page changes, say, every six months or every day, that can play into how often we look at it. But it's not the case that caching itself helps or hurts the site in ranking, so it's something that will affect users a lot more than search engines.
There can be a lot of indirect effects there, of course: the more you can do on your side to make pages fast for users, the happier they are, and they'll recommend your site, and that kind of indirect effect can come back to you. Is there some kind of override where a Google engineer can just go in and change the search results-- is that what you mean with the question? I guess it doesn't really work like that. It's not the case that we would do that, because we do try to make our algorithms such that they work in general; that's the most efficient way to work. If we have, say, ten locations or websites where we can see what's happening in the search results, we can look into those searches, see what went wrong here, and check whether it's specific to those locations or more general, and maybe in those cases the algorithms can be made better. That's much more useful to us than pushing changes for individual websites. And when it comes to changes that would affect how a website ranks, there's a really long review process that a change goes through; it's not that we take an engineer and say, just change the search results on Monday. We have a lot of reviews of every change that goes out, including a whole group of people looking at sample results from the different changes and weighing the pros and cons. So there's this really long process for making a change; it's not something one person does on a whim. In a case like this, you can just remove those; you don't need to go through the settings or the titles of all of those pages. On the question about pages with a rel canonical pointing at the main domain: on the one hand, if you have a lot of pages with a rel canonical pointing at the homepage, that's a pattern we often see when something was implemented incorrectly. What might happen in a case like this is that the webmaster is essentially trying to shoot themselves in the foot, and we can help them by ignoring that canonical; we'll try to do that in some of those cases. On the other hand, it's really based on what we find, so just because a page has a rel canonical like that, it doesn't mean that it won't be shown. So I'd take that report, look at those URLs specifically, and double-check that the canonicals are really what you want them to be. And this isn't something where we'd penalize your website; it's not that your website is devalued because of a setup like that. It's really just a little bit more information to help us index the site a little bit better. We're trying to figure out best practices for serving all of Europe: should we claim separate subdomains and geotarget them, or not? I'd primarily think about whether or not you actually have content for individual countries. If you do have content for individual countries, it's really useful to split that out for those individual countries into separate sections, which could be subfolders, subdomains, or separate domains, which you can geotarget. Geotargeting is a bit of a stronger signal; it really tells us that this content is meant for users in that country. Hreflang is more that you're telling us there's equivalent content depending on the location or language settings, and we use that to pick which version to show. So geotargeting helps for those individual countries, and hreflang helps to make sure that the right version is shown when we do show your pages. One thing to keep in mind with geotargeting is that a ccTLD, a .de or some other country-specific top-level domain, is tied to that country from the start, so that wouldn't be a good choice if you want to target a different country. Can I just clarify that: if you have a .com domain and you use a subfolder, can you not just set that subfolder up as a separate property within the tools and geotarget it? You can only do that if it's on a generic domain. So if you have a .com, you can set up a subfolder, example.com/us say, as a separate property and say that it's for the US; but if you have a country-code domain, you can't override that. You can also verify a subdomain as a totally different property and geotarget that subdomain. From our point of view, a subdomain or a directory works equally well.
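As a rough sketch of the hreflang side of that setup, assuming a generic .com domain with a German folder, a general English folder, and English doubling as the fallback for everyone else (all URLs hypothetical):

    <link rel="alternate" hreflang="de" href="https://example.com/de/">
    <link rel="alternate" hreflang="en" href="https://example.com/en/">
    <link rel="alternate" hreflang="x-default" href="https://example.com/en/">

The country targeting itself would still be set separately, for example by verifying example.com/de/ as its own property in Search Console and geotargeting that folder.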
Now the search results contain a watch trailers link for movies, but not for our movie pages-- I don't know where that comes from. You can send us an example; those are things we sometimes discover on a page, so we can take a look at how they're shown. What sometimes happens is that the link is only visible once JavaScript has run. If we can see the URL in the HTML source code, we can usually pick it up, but on the other hand, if it's only loaded through a separate JavaScript file, or from a third party, that's harder for us. On third-party merchant reviews: should we also post those reviews directly somewhere else? I don't know about posting them to Twitter and Facebook; that seems maybe a little bit tricky, and it's more a question of how you engage with that audience-- whatever makes sense there. Like I said before, we don't take social into account for ranking, so it's not that. If you're using something like TripAdvisor to pull reviews through onto your site, the question is whether you're just doing it for the users, not for Google, or whether we see it as additional content as well. It can help, in the sense that it becomes part of the page and we can use that text for ranking. What happens, though, is that if these reviews are also on the TripAdvisor page, then we'll try to work out which page to show when someone is searching for that specific hotel: it could be the TripAdvisor page, it could be your page. So from that point of view, it's something we kind of treat as text on the page, and it's not that we'd devalue the website because it has that text on it. When it comes to marking those reviews up with structured data to get rich snippets, that's always a little bit of a tricky topic, because on the one hand you'd like to have a rich result for your hotel, but on the other hand the reviews are actually on TripAdvisor. Our guidelines used to specifically mention that structured data markup should really be for content that originated from your website, but I don't know if that has changed. I know there were some changes there, because it's kind of a borderline situation when it's just a collection of someone else's reviews, so I don't know what the current status is. Coming up: if you have an e-commerce website with a lot of product variants, is it a good idea to split them out, like the GB example in the question? I think in a situation like that you always have to consider whether it makes sense to have one page for all of these variants on one URL, or to split them up. On the other hand, maybe it's something where, taking a step back and looking at it, it doesn't really make sense to list this product separately if it is exactly the same product. That's essentially up to you, whether or not you want to split those out and show them separately. For example, what I sometimes see is websites that have an English and a German site, and the reviews that were posted on the English version are also used on the German version.
Sometimes it makes sense to reuse some of those, and sometimes it doesn't. What happens on our side is that we recognize this block of text is duplicated across pages: we know that this page has its own unique content, and that there's also a block of text here that appears elsewhere. So if someone is searching specifically for something within that duplicated block, we'll recognize that it's the same across multiple pages, and it doesn't make sense to show all of these pages, so we'll try to pick one of them to show, whichever seems to match best. It's not that there's any penalty involved; we just try to pick which one to show. And if the pages are exactly the same, then what generally happens is we'll just keep one of these pages, but we'll know about the others. That's something where the rel canonical helps, where internal linking within a website helps, where redirects within the website help; all of these things tell us which version, which text, you want to have indexed. From my point of view it's not a critical problem, but our crawling and indexing is limited, so I'd try to keep it tidy by being really consistent. On the question about reviews that are pulled in with Ajax and don't show up in the cached page: yes, it usually happens like that. We try to process JavaScript, so content that's pulled in with Ajax or other technologies, as long as those resources aren't blocked, we'll use for indexing the page. But the cached page is based on the HTML we fetched, so the cached version can be missing that content; that's nothing to worry about. Let me run through a bunch of the submitted questions, and then we'll open it up for more questions. The local English version ranks much lower: hreflang essentially just tells us which version to show for a user; it doesn't affect ranking, so changing it also wouldn't give a page a promotion. With regards to the next question about indexing, as far as I know the team is working on that, and I don't have any updates to share there. Pages on our site are never shown in searches, not even in the first 30 results: for these pages it's really hard to say what might be happening without looking at them, so posting in the forum and getting other people to look at that helps. Otherwise it's really hard to say what might be happening there. Do we have direct access to check what a site is doing, whatever system is used to access it? We do have various tools to diagnose what's happening there and whether things are working as they're supposed to, but we do need to be told about a problem; otherwise we don't really know. Is the link rel canonical a suggestion or a directive to Google? It's a suggestion, and we do try to take it into account, but we won't always use it as kind of a definitive answer. So if, for example, your whole site links to one version of a URL, but the rel canonical says something else, then we don't really know which one you really want to have indexed. So as much as possible, be consistent within your website; that's always great advice. On being sent to the local version of Google when searching: google.com redirects to the local version whenever possible. There's a way to try to block that from happening: the no-country-redirect version removes the country redirect for a period of time, which can be used for testing. In general, if a website generates tens of thousands of new pages, then for search I'd recommend letting us know about those new pages, for example with sitemaps, making sure that your server is fast enough that we can crawl them, and making sure the content is interesting enough that it doesn't get lost in the search results.
For the next question I'd probably need some more context; it would be good to start a thread with a little bit more information so that we can actually help you. On personally identifying information being added to Google Analytics: I don't really know much about what's in the Terms of Service there or what you can do, so for more information I'd check on the Analytics side. If you're happy with the way things are being indexed, you can just leave it as it is; if there's something very specific that you want to control, then you can give us that information, but if things are working out, that's OK too. We'll have time for other questions from you afterwards. On variations of the search results which rotate day by day: we do run a lot of experiments, and those essentially run globally across the search results, so because of that you might see changes from day to day in your search results. Sorry, is there a general number for that? We did a blog post on something like that last year, I think; I'm not sure, I'd have to double-check that it's even still there. On the question about a disavow file that included domains that should not have been in it: yes, just submit a new file with only the entries that you do want to have disavowed, and that replaces the earlier file; the change is picked up as those links are recrawled. OK, more questions from you guys. On boilerplate at the top of the page, and whether it should be moved further down: no, I wouldn't worry about that. We try to recognize boilerplate, the same content that's included over and over across a website, which is often things like who you are and what you do, and we try to treat that appropriately. So it's not something where you need to artificially move blocks around on your pages. Where would you look at the page to check that-- the cached version? Take a look at the rendered version of your page; usually, if the main content is pulled in with JavaScript, you wouldn't see that in the cached version anyway. Then a question regarding a specific site: I don't know, I've never seen anything quite like it. It sounds like you've already got some issues there, but I can't just make a plan here telling you, this is it. I'd go through the things I mentioned; these are all small things that are worth taking a look at on your website, and this particular part around crawling may be worth checking on as well. I just don't see anything on our side that looks like a specific problem. What I would do in a case like this is double-check directly in the tools how we actually see that page. And that's a good question: say the German and French language versions are set up properly, and you're in Germany and you search right now, but the French version shows. That's kind of a borderline case, and what happens there is that we don't blindly follow the location. If we see that, we try to figure out what the user really wants: we can look at the query and the settings in the browser. So if someone in Germany is searching in English, for example-- and I sometimes have that, searching for something in English-- it wouldn't make sense to pull out the German version just because they happen to be in Germany. John, just wondering, is there JavaScript access to the Search Analytics data, and secondly, is it possible to pull more than one dimension? You can do that with the Search Analytics API. I don't know if there's specifically a JavaScript framework that you can access it with, but you can query it with, say, country and page together, and then you'll get all of that information to figure out exactly what you're looking for. There's documentation for the API, and I remember that some tools are already out there that use it.
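As a rough sketch of what pulling several dimensions from that API can look like in Python, assuming the google-api-python-client library and an already-authorized OAuth credentials object (the property URL and dates are placeholders):

    from googleapiclient.discovery import build

    # 'creds' is assumed to be an authorized OAuth2 credentials object for a verified owner
    service = build('webmasters', 'v3', credentials=creds)

    request = {
        'startDate': '2015-08-01',
        'endDate': '2015-08-31',
        # several dimensions in a single request
        'dimensions': ['query', 'page', 'country'],
        'rowLimit': 5000,  # per-request cap at the time of this Hangout
    }

    response = service.searchanalytics().query(
        siteUrl='https://www.example.com/', body=request).execute()

    for row in response.get('rows', []):
        print(row['keys'], row['clicks'], row['impressions'], row['position'])

Each returned row carries one key per requested dimension, so query, page, and country come back together for every entry.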
Great, thanks. And is it also possible to pull more than the limit? It has a limit of, what is it, five thousand rows at the moment. But there are things that you can do to kind of get more information, for example looking at it month by month, or week by week, or day by day, so that you can get five thousand rows for each of those. If you need more than that, there are not too many possibilities at the moment, and I don't know if that will change; it really depends on the feedback we get. All right, with that, let's take a break here. Thank you all for joining, thanks for all of the questions, and I'll see you in one of the future ones. Have a great weekend, everyone. OK. Welcome everyone to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller, and I am a webmaster trends analyst here at Google in Switzerland. And part of what I do is talk with webmasters, publishers, people like the folks here in the Hangout, as well, who are working on websites. As always, we have room for some questions from some newer folks to start off with. Are there any new folks in the Hangout that would like to ask a question? John, yeah, I have a question, if I could go ahead. Sure. So our company has been trying really hard to get some different parameters de-indexed-- so things like searching, and sorting, and you know, like page [? due ?] within our pagination. We have everything set up as correctly as we possibly can, from what we can tell. Pagination is running fine. We go into Webmaster Tools and set our parameters correctly. Canonicals are all set up right. But nothing really seems to be dropping out of the index. In fact, some things like pagination, where we had a bug that caused a bunch of our pages, deeper in the series, to get indexed, we saw that drop pretty drastically a few days after we fixed the bug. But now it just tends to be going back up and down. So we're really not sure why, first of all, our parameters, like search and sort, aren't dropping out. And then also, why the pagination isn't being picked up. We've used robots.txt, as well, to block crawling and to no index those pages. And just nothing seems to be working at the rate that we would have expected. OK. I guess there are a few things that kind of come together there. I don't know which parts are relevant in your specific situation. So the first one is, if these parameters are blocked by robots.txt, then we won't be able to see any no index or canonicals on those pages. So the robots.txt is something I wouldn't recommend doing there unless you have a really, really critical problem, like we're crawling your server so much that it's crashing or something like that. So I'd try to avoid using robots.txt for solving duplicate content problems like that. The other thing is, with the rel canonical, when you're looking at that, we first have to kind of crawl and index the page that has the rel canonical on it before we can forward everything to the new pages. So if you're specifically looking for those pages in search, then chances are you'll still find them. But if you search for something like some of the text on the page, then you probably won't find those in search. OK. So would you say the best thing to do would be to temporarily open up our robots, so that those can be crawled? And then just close them back up once we start to see the results?
I'd leave it open. I'd let it continue to be crawled, so that the canonical can be found. If you have a no index on these pages, that can be found as well. So essentially, we can continue looking at your site normally. With the robots.txt in place, if that's blocking those parameters, we'll see links to those pages from the other pages of your website. And we'll try to follow them. If they're blocked by robots.txt, then we'll index those URLs without any content. So if you do a specific site query, something very specific to look for those URLs, you'll see the URL is indexed, but it says, blocked by robots.txt. OK. All right. Thank you. All right. Hey, John. I have one question. Sure. We recently launched a website wherein we kept all the content in JavaScript. And with the new changes from Google, we saw that Google started crawling all the content, what we have on the website. Not in the full version, but in the text version, I can see all the content is crawled and indexed in Google. We wanted to check, like, what is the value of this content compared to the plain HTML content, what we have on the website? And the second question is, if you do any interlinking from this content, does it carry any value? So if your website is built with JavaScript, like with a JavaScript framework that pulls in content from other sources, then we'll try to render those pages, kind of like a normal browser will do that. And we'll take the content that we find from rendering and use that for indexing. So if there are links in there, we'll follow those links. We'll keep them as normal links. If there's content in there, we'll treat that like normal content. It's not that we would kind of devaluate anything just because it's generated with JavaScript. But the important part is that we can crawl all of these JavaScript resources. So if you have JavaScript files or if you have Ajax responses, those kind of things, make sure that those are not blocked by the robots.txt, so that we can actually pick up everything and crawl it normally. OK. Let's start through the questions that were submitted, then. Looks like it has a weird order. Let me see if I can flip back and forth and get those kind of sorted by votes. I guess not. OK. Is there any SEO benefit to having an RSS feed? So this is, I think, a good question. In general, there are two aspects involved there. One is if you're looking for a ranking boost by having an RSS feed, that's not going to happen. The RSS feed is really something that we see more as a technical help to crawl and index the content a little bit better. So if you have a website that's changing content fairly quickly, if it's a news website, a blog that has a lot of new content, maybe even a shop that has lots of new content, then the RSS feed really helps us to stay on top of things, so that we can pick up all of those new URLs and crawl them as soon as we can. So if we recognize that the RSS feed is working fine, and we get a ping from the RSS for the RSS feed, then we'll pick up the RSS feed. We'll try to crawl those pages that are updated there maybe within seconds. So that's something that works really, really quickly. You can also, if you have a website that's updating a lot very quickly, you can also use PubSubHubbub to kind of speed things up for the RSS feed, so that you don't need to ping in separately. So those are all things that really help us to pick up new and modified content within a website with a RSS feed. There's no direct ranking boost for the website itself. 
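For a rough idea of what such a feed looks like, here is a minimal RSS file that also declares a PubSubHubbub hub so that updates can be pushed rather than waiting for the next fetch; the site URLs are placeholders, and the hub shown is the public one Google operated at the time:

    <?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
      <channel>
        <title>Example shop updates</title>
        <link>https://www.example.com/</link>
        <description>New and changed pages</description>
        <atom:link rel="self" href="https://www.example.com/feed.xml" type="application/rss+xml"/>
        <atom:link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
        <item>
          <title>New product page</title>
          <link>https://www.example.com/products/new-widget</link>
          <pubDate>Tue, 01 Sep 2015 10:00:00 GMT</pubDate>
        </item>
      </channel>
    </rss>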
But of course, if this content can't be indexed, then we can't pick that up and show that in search. So if there's something new on your site and the RSS feed helps us to find it faster, then we'll be able to show it in search faster. But it's not that there's any bonus for the existing content, for everything else, just by having an RSS feed. Hi, John. I'm working with a big automotive publisher in the US, and they're using their news site map. Would that be kind of equivalent, or do you process those independently, or just by the Google News bot? And the RSS feed would still help to kind of discover and index faster the new articles? They put out like about 10 articles per day, so it's really important that those get picked up really fast. They have a news site map, so I was wondering if that's enough to kind of get the content indexed. If they ping that news site map, those 10 times a day, then that's generally enough. We have a blog post on the blog about RSS and site maps, maybe about a year back. That kind of goes into the details of where you might use one or the other format. But if you're looking at something where you have 10 updates a day, then I think either variation would work. If you're looking at something that has thousands of updates a day, then probably an RSS feed can help to kind of focus on those updates specifically. Or you can also use PubSubHubbub to push all of that direct. OK. OK. Would there be a small ranking benefit if we compare the same site once with a lot of 404 and soft 404s present, and other technical problems, such as wrong hreflang, no canonicals, in comparison to the same site in a perfect technical condition? So again, they're the two aspects here. On the one hand, the crawling and indexing part. And on the other hand, the ranking part. When we look at the ranking part-- and we essentially find all of these problems. In general, that's not going to be a problem. Where you might see some effect is with the hreflang markup, because with the hreflang markup, we can the show the right pages in the search results. It's not that those pages would rank better, but you'd have the right pages ranking in the search results. With regards to 404s and soft 404s, those are all technical issues that any site can have. And that's not something that we would count against a website. On the other hand, for crawling and indexing, if you have all of these problems with your website, if you have a complicated URL structure that's really hard to crawl, that's really hard for us to figure out which version of these URLs we should be indexing, there's no canonical, all of that kind of adds up and makes it a lot harder for us to actually crawl and index these pages optimally. So what might happen is we get stuck and crawl a lot of cruft on a website and actually not notice that there's some great new content that we were missing out on. So that's something that could be happening there. And yeah. I guess that's pretty much the comparison. So it's not that we would count technical issues against a site, when it comes to ranking. But rather that these technical issues can cause technical problems that can result in things not being processed optimally. Hey, John, can I jump in real quick? Sure. I unmuted because I'm in a coffee shop, ambient background noise. I'll try to keep it short. 
I've noticed, especially in the last six months or so, across a number of different verticals, rankings seem really stale and static, like they're not moving around at all regardless of on and off sites activity. We're doing a couple of viral marketing campaigns that we're going to get a ton of links through. We've restructured-- and this is not just one site, the site that I've been pestering you about for the last year and a half. It's a number of them. They're responding well on Yahoo!, Bing. What's going on with Google, basically? It's hard to say what you're seeing there. So in general, we are working on search. And we are pushing updates all the time. So I think last year, we made over 1,000 updates in search. And these are things that you'll always see resulting in fluctuations in ranking, changes in the way that we rank things. Also, of course, we pick up all of the new content, pick up all of the signals that are associated with that, and try to take that into account. So if we see that signals for one site changed significantly, then that should be picked up within a fairly short time, maybe even like minutes, hours, that kind of time frame even. Because it usually-- I mean, I've been doing this stuff for, like, 12 years. And in the past, usually within a couple of weeks after you start implementing some changes, whether it's on or off site, you're going to see some movement. The needle's going to move one way or the other. Lately, at least for all of the sites that I'm monitoring, even competitor sites that we're just monitoring, actively monitoring, nothing's moving. And I don't know if it's just the verticals that we're in, or if certain images are tougher to change. Or I don't know if you can elaborate on that. I don't know. It's an interesting question. But I don't really have anything specific where I'd say, oh, we, I don't know, all went on summer vacation for the last couple of months, and nothing is happening in search. People are working here, and they're pushing changes. We have our algorithms that are picking these things up automatically. So it's not that there's any kind of an artificial freeze on the search results. Or we'd say, for this niche, or for this kind of website, or even for search in general, we're freezing things and not changing things at the moment. Yeah, I mean, things should always be changing. But of course, if you make a lot of changes on your website and you don't see any results, then maybe some of those changes aren't really relevant for that site's ranking at the moment. That might be one aspect there. Thank you. All right. When blocking dynamic URLs with robots.txt, does it make sense to also implement a meta robots tag? I think we looked at this briefly before. But essentially, if a page is blocked by robots.txt, then we can't pick up any of the meta tags on the page. So we can't tell that there's a no follow on there, if there's a no index on there. We don't see any of that when the page is blocked by robots.txt. So if those meta tags are really important for you, in that you want something indexed, or you don't want something indexed, or you want the canonical to be picked up, you want to kind of combine different pages into one single page so that all the links get concentrated in one place, then make sure that we can actually crawl those pages, so that we can process them for indexing. 
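A small sketch of the two mechanisms being contrasted here, with a placeholder path: if the robots.txt rule is in place, the page is never fetched, so the meta tag on it can't be seen at all.

    # robots.txt -- the page is never fetched, so any tags on it stay invisible
    User-agent: *
    Disallow: /filters/

    <!-- on /filters/red-shoes -- only takes effect if the URL may be crawled -->
    <meta name="robots" content="noindex, follow">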
Even if that means that these pages have a no index on them and you don't want them indexed, if we can't crawl them, we can't tell that there's a no index on there. To get rid of doorway pages, we're thinking of doing 301s. Is that better than doing a follow no index? As these pages get traffic, we're trying to tidy up our site, but don't want to get punished or have a negative impact if we tidy up wrong. So essentially, both of these options would work. You could also, theoretically, use a 404 on these pages. So from a technical point of view, these options are all possible. Which one makes more sense for you is essentially left up to you. So if you want to combine everything into one single page, using a 301 is a great way to do that. If you want to keep these pages kind of as ad-landing pages, those kind of things, but you don't want them taken into account for search, then maybe a no index is the right option here. But essentially, these are all different ways of kind of reaching the same goal. And it really depends on your site, your goals, what you want to achieve with that regarding which one of these options you choose. What do I have to do to show my company information from Wikipedia in the Knowledge Graph sidebar on brand search queries? I've already added it to Wikidata and Freebase. What else is required? Essentially, we pick up this information from various sources, like Wikipedia, like Wikidata, Freebase. And our algorithms then try to figure out whether or not it makes sense to show that information. And sometimes this is not really a technical decision, in the sense that our algorithms might look at this and say, well, it probably doesn't make sense to show something specific from Wikipedia in a sidebar here. Or sometimes they'll say, well, it makes sense to show something, but we don't really have that much information, so we can't show anything. So I think by going through and making sure that the base information is available, you're doing the right thing there. And the rest is not really something that depends on technical issues, but rather on how we think that this content matches what a user might be looking for. As a multi-location business, we use the same toll-free number for all our branches. I've read we should use local numbers for each local branch. Does not using a local number impact local rankings? I think this question is more about Google My Business local search, and I don't really have an answer for that. So for that, I'd probably check in with the Maps forum or with the Google Local Business forums and ask there. It's possible that this is used for the Google Local Business entries, but I really don't have an answer that I'd feel comfortable sharing. When using parameters in Google Search Console, is this only a guide for Google, or is this definite? Is it better to use parameters or use robots.txt if I want to block certain pages from being crawled? So this kind of goes into the question we had in the beginning. In general, using the parameter settings in Search Console is a great way to give us information about which parameters you think should be indexed, or which ones need to be crawled, or need to have a representative URL crawled. But it's not a complete block. It's not that we would stop crawling those parameters completely. But rather, our algorithms might look through a sample of these and kind of double-check what the webmasters applied there. And if that looks good, then they'll continue using that. 
But if they notice that there's a significant mismatch between what the webmaster said and what is actually found on the website, then we might choose to kind of not value that webmaster input that much. So that's something kind of to keep in mind there. If you really need these pages to be blocked from crawling, if they're causing a load on your server that you can't handle, or they're causing problems elsewhere within your website, I'd really use a robots.txt file instead. Because that's really a definitive directive, where you're saying, Google, you shouldn't be crawling this part of my website or URLs that look like this. Search Console is great to kind of give us more information about your website. These parameters are important, or these parameters are for sorting or language selection. But it's not something where we'd say, this is a definitive rule for Google, that it should be used like this or not. So if it's causing a problem, use the robots.txt. If it's just a matter of giving us this information so that we can index and crawl your website better, then, of course, Search Console is the place to go. How does Google show in Google Webmaster Tools how a user sees your web page and how a crawler sees it? When I'm blocking some content from the crawler, how does Google show this content in reference to how users see it? So I don't actually know for sure if we still do this, but I believe in the Fetch as Google feature, if you select that you want to have the page rendered, we show you the page how Googlebot would see it, based on the robots.txt directives that are in place, and how a user might see that, kind of disregarding those robots.txt directives. So specifically around JavaScript, CSS, images, those kind of things. Because often we find that if you compare those two screenshots next to each other, then you'll notice that, oh, this is really critical content that I'm blocking. Or you'll notice that maybe this is totally uncritical content that's being blocked, and it has no effect at all on how the page is actually rendered by Google. So that kind of difference between those two screenshots helps you to figure out, are you doing the right thing with a robots.txt file, or are you blocking too much? Or are you maybe blocking something that's not really that critical, which wouldn't be like a first priority that I cleaned up? Will the feature Critic Reviews and Knowledge Card be available globally, including countries using double-byte character sets? I don't actually know what the plans are there. In general, we do try to start off with something kind of limited to try things out. And then we roll it out globally as much as possible. And differences around character sets are usually less of a problem, because we have a lot of practice reading pages in various encodings and character sets. Sometimes it's more of a problem of us recognizing or understanding the content properly, being able to double-check that it's actually working as expected there. But for the most part, we do launch things sometimes on a limited basis initially. And then we try to expand that when we see that it's working well and use that globally as much as possible. Ideally, all of our algorithms would be global, because that makes it a lot easier to maintain things. We don't have to worry about this specific algorithm that's only used in Japan. But rather, we can focus on one algorithm that's [INAUDIBLE] across the whole world, ideally. Would URLs' content as below count as duplicate content? 
Let's see. There's domain.com/products/item25. And there's domain.com/es/products/item25. So I think the question is around, would translated content be seen as duplicate content? And for us, the answer is a clear no, in that, if something is translated, it's really something different. It's not the same as the other language content. So just because kind of the source of the content is the same, and you have one version that's in English, and one version that's in Italian, one version in Spanish like this, doesn't mean that these pages are equivalent, that they could be swapped out against each other, that they're duplicates. So we'll try to crawl and index these pages separately. We'll try to show them in search results separately. What might happen is, if you have an hreflang link between these pages, where you say, well, this is the equivalent content in Spanish, and this is the equivalent content in Italian, then we might use that to swap out those URLs when someone is searching in those languages. But we wouldn't treat them as duplicates. So we treat these as normal URLs. And especially in the beginning when we just look at the URLs, when we see these URLs alone, kind of like you have them here in the question, we would definitely just try to crawl them separately if we can. We wouldn't be able to recognize from the beginning that, actually, this is kind of a similar URL that maybe we don't really need to crawl. Hey, John, can I ask you a question? Sure. So we have also a big site. I just sent you the URL. And from what we can tell, it doesn't have any content quality issues. I don't know if you can confirm that or check it out. But is it a good idea-- how would you go about migrating a site that has some quality issues to a bigger site that doesn't have? And maybe you can check also the URL and confirm for me that my assumption is correct, and it's not like one plus one equals-- going to be four in the quality arena. But how would you think about that idea? So when you're combining sites? Yeah. So not when you're, like, moving from one domain to another? But rather, you're taking two sites and you're putting them into one site? Correct. OK. And one of them is much bigger than the other. Yeah. I don't think there's one easy rule to kind of handle that. So in general, what you'd try to do is to make sure that all of the old URLs-- so if you have two domains, for example, and you keep one of them, and the other one, you want to kind of fold it into your main domain, then you need to make sure that all the URLs from the domain that you don't want to keep kind of redirect to an appropriate URL on the new domain. And that could be a new page on the new domain. It could be an existing page. But essentially, we'd want to see kind of all of those redirects happening there. But the tricky part here is, of course, folding two sites together is not as easy as, like you said, one plus one is two, in that, we take into account all the signals of the combined site. And that's not something that we can just kind of extrapolate from. So we essentially have to crawl pretty much all of those URLs to be able to fold things together. And then reprocess everything on our side to see where does everything kind of end up in the long run. So with a new site, how does it look in the bigger picture? Got it. But there is not something like-- so you do an overall assessment? There is not anything like, OK, the content quality penalty from one site will move to the other? 
And all of a sudden, you will have a bigger site get penalized? As you said, you will crawl the whole picture and get the whole picture to make an assessment. Is that right? Yeah. Yeah. That's pretty much what we try to do there. We try to treat it as one website. So it's not that we'd say, oh, well, this piece of content here came from this website here, therefore it needs to be treated differently than the rest of the content on the website. But rather, we'd look at everything the way that you have it now on your website or in the end. And we'd try to figure out how we should be treating this website overall. And how do algorithms like Panda work, given that they don't run on a continuous basis, in a situation like this? So it's hard to say. Because we do have to kind of update those algorithms from time to time. They do look at the site overall. So if there are low-quality issues across the whole site, then that's something that could be taken into account there. But if there are just low-quality issues on a very small part of the site, then that's generally less of a problem. OK. Is my assessment correct that the URL I gave you seems to be fine? It's not-- it doesn't have any content quality issues like the other one? I don't know. I haven't had a chance to take a look. OK. But I'll try to take a look afterwards. But in general, it's something where it's really hard to say ahead of time how that will kind of end up, if you combine two sites. If you move from one domain to another, it's easier to say, well, we just transfer all of the signals from this domain to the other one. But when you combine things, it's hard to say. Sometimes one plus one ends up being four, in that we see this as a really, really awesome combined site. Sometimes we'll see that one plus one is maybe one and a half, because it's kind of a mixture of a lot of low-quality content with higher quality content. It's kind of hard to figure out where that should be ending up in the long run. So that's something where I'd kind of take care to figure out how exactly you want to merge these sites to make sure that you really have something fantastic in the end and not something that's kind of a mixture of, I don't know, in an extreme case, two mediocre sites. I don't think that's the case with your sites. But that's something where you kind of have to think about what you're trying to achieve in the long run. Yeah. No, in the long run, strategically, we want to do that combination. From a branding standpoint and from a focus, it's not a question. But what we don't want to happen is transfer a penalty to a bigger site. So I'm glad to hear that you're going to assess the overall picture and not just say, oh, those pages had the penalty that came over. Now we need the second site to have a site-wide penalty as well. So that's reassuring. And thanks for your offer to check out the domain name offline. All right. Let me run through some more of the questions that were submitted. And then we can get back to some more open discussions. For multi-location businesses, some sites have branch pages for each physical location. If you offer the same service at each depot, how can you rank for local terms unless you have location-specific category pages hanging off branch pages? Is that bad? So I guess there are multiple ways that you can look at this. 
On the one hand, if you have kind of the extreme case where you have almost doorway pages, where you have each location has its own website, then that's something that we would generally see as being kind of spamming, something that I'd recommend you not do. On the other hand, another extreme idea, you just have one home page that's for all of your branches. That might not be perfect either. In general, what you'd have, if you have local businesses, you probably also have a local business entry. And that's something that could be ranking in the search results. So from my point of view, what I'd definitely try to aim for is making sure that you have your local business entry for all of these locations. And if there's something specific that you have to share around these locations, maybe it makes sense to put something on a website. Maybe it makes sense to combine some of these entries together. So if you have branches in all the cities across the US, maybe you'd want to separate that up by state or by type of service that you offer in general, and kind of list the branches that offer the service. I'd just shy away from going down the doorway route, where you're essentially just creating pages, or websites, just for these individual branches, and they don't have any unique value of their own. So that's what I'd avoid. If there is some way that you can provide unique value maybe for a handful of branches, maybe for some of the services that you offer across these branches, then, by all means, put that out on a website. But avoid just creating pages just for kind of matching those queries. John, if I can jump in again one last time. I'm going to try to restructure the question. How important are links and outreach in content marketing in today's Google algorithms? We use both. I don't think this will really help you in that regard. But I think the interesting part of our algorithms is that we do use so many different factors. And we don't apply the same weight for everything. So it's not the case that every website needs as many links as Wikipedia has in order to show up in search. But rather, depending on what people are searching for, we try to balance the search results and provide something relevant for the user. And sometimes that means that we put some weight on links, because we think that they're still a relatively useful way of understanding how a site should sit within the context of the web. And sometimes it might mean that we don't focus on links at all, that we have a lot of good content that we can recognize there without maybe any links at all, with maybe a really small number of links that are actually pointing at that. So it's not the case that there's one fixed weight, where we'd say, well, links are 85%, and on-page content is this much, and this is this much. But rather, we try to balance that based on the search results, based on what we think leads to relevant results for the user. OK. But I mean, essentially, building a quality website with unique value proposition, with some links, good content, social interactions, should produce good results? Generally, yes. But I mean, it all depends on what you're aiming for. If you're trying to-- Number one. Yeah. But if you're trying to be, like, an online bookstore, and you have a nice website, and it has a handful of links, and some people are talking about it on Twitter, then you're not really going to pass everyone else that's been running online bookstores for the last 20 years. 
So it's really a matter of kind of looking at your site in the context of where you want to be. And some niches are really hard to kind of get into. And it's not a matter of just having a nice website that people think is kind of nice. You really have to have a really strong presence there. And that's not something that's easily done. Sometimes that takes a lot of time. Sometimes that takes a lot of work. Sometimes that takes a lot of money to actually create a website that people think is professional enough to actually trust. I trust your answer. Thank you, John. Thanks. John, regarding that last question with multiple locations of businesses. So for example, if you have one location for each state, let's say, would it be enough to kind of add on that location page maybe directions on how to get to that office, or maybe some information about even the staff, or other contact information to kind of build up the uniqueness of that page? Would that be enough to kind of tell Google, this is not just a page made for ranking or query [INAUDIBLE]? Yeah. I mean, that's all good things that you can add to the page. Opening hours, those are good things to add. Maybe unique information about this location could be useful. Those are all things that add extra value, in that, if someone is searching for that business, and they click on a search result, and they land there, they're not going, why is this showing in search? This doesn't provide any value. So those kind of things definitely make sense. They also help us to kind of recognize where we should be showing this search result-- maybe to extract the opening hours, that we can use in the sidebar, all of those things. Right, right. And I assume structured data markup actually helps you kind of find out? Yeah, yeah. Sure. Cool. Once I change its business and domain name, pages are still showing with the old domain. 301 redirects have been set up. But I must change the search listing so the new name and domain appears. Is change of address in Search Console the best way to go? The change of address tool in Search Console is definitely a good thing to do. We have a whole set of guidelines in our help center about what you could be doing when changing from one domain to another. And I'd kind of go through that and just double-check that you're really doing everything right there. The thing to keep in mind is that even if you have a redirect set up, it's going to take quite some time if you do a specific search for the old domain, for that to really drop out. So for a certain amount of time, we'll probably try to be smart and say, oh, this person is searching for the old domain. And we know about this old domain, so we'll show that to you in the search results. And that's probably not what you're trying to achieve there. So that's something where if you do a site query for the old domain, you'll almost kind of need to be prepared for those numbers to be there for quite a long time-- maybe months, maybe a year, even, before they actually drop out. But the good news there is that if someone is searching for your business name, or for your company, or the type of business that you're doing, then we'll be showing the new domain as much as possible. So essentially, if you're looking for the old URLs, then we'll try to show those to you anyway. So that's not a good metric to focus on. But if you're searching for your business, the business thing, then that's something we'll probably pick up on fairly quickly. 
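As a rough illustration of that kind of double-checking when changing domains, here is a minimal sketch; the hostnames and paths are made up, and it relies on the third-party requests package. It simply verifies that a sample of old URLs return a 301 pointing at the new host.

# Minimal sketch: spot-check that old-domain URLs 301-redirect to the new domain.
# Hostnames and paths are placeholders; requires the third-party 'requests' package.
import requests

OLD = "https://old-example.com"
NEW_HOST = "new-example.com"
sample_paths = ["/", "/about", "/products/item25"]

for path in sample_paths:
    # Fetch without following redirects so we can inspect the status and target.
    r = requests.get(OLD + path, allow_redirects=False, timeout=10)
    location = r.headers.get("Location", "")
    ok = r.status_code == 301 and NEW_HOST in location
    print(path, r.status_code, location or "(no Location header)", "OK" if ok else "CHECK")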
John, why are you not on the Open Office Hours drawing? The person has hair, but you don't have any. OK. I need to add some hair, I guess. Yeah, I thought it was a really nice trailer image there. But maybe I need to get one without hair. I don't know. [SIGHS] Always have to be so accurate. Terrible. I am the webmaster for an information-based website. When a user fills out a form, a conversion like that, will the algorithm deduce that it's a positive ranking factor? Or if a user spends time reading a blog article, does that increase the authority of my website? So in general, I don't think we even see what people are doing on your website-- if they're filling out forms or not, if they're converting to actually buying something. So if we can't really see that, then that's not something that we'd be able to take into account anyway. So from my point of view, that's not something I'd really treat as a ranking factor. But of course, if people are going to your website, and they're filling out forms, they're signing up for your service, or for a newsletter, then generally, that's a sign that you're doing the right things, that people are going there and finding it interesting enough to actually take a step to leave their information as well. So I'd see that as a positive thing in general. But I wouldn't assume that it's something that Google will pick up as a ranking factor and use to kind of promote your website in search automatically. As Google started indexing the content in JavaScript, will it have the same value as plain HTML content interlinking [? within ?] the JavaScript content? Will it carry the same value? Yes. Like I mentioned before, when we render the pages and we can pick up the content through JavaScript, then that's something we'll treat as normal content. And if there are links in there generated by JavaScript, we'll try to treat that as normal links as well. So that also kind of goes into the area of links that you don't want to have on your website. If there's something like user-generated content, if you have advertisements on your website for other websites, then I'd just make sure that you're using the rel no follow there to make sure that those links don't pass on page rank. So even if they're generated with JavaScript, you can add the no follow attribute there by whatever DOM manipulations, or however you generate those links within your content. And we'll respect that appropriately, as well. Do desktop/mobile server errors impact search ranking? I noticed many 5xx errors within Google Search Console. But mostly, those URLs are no index at the source. But why are those considered errors and crawled by Google? So with a 500 error, we'll assume that's a server error, and we won't index that page like it is. So if we see a page that returns a server error, we won't take that content into account for indexing. Of course, if the page has a no index on it, then we wouldn't take it into account anyways. So that's not something where you'd see any kind of an impact directly in search with regards to ranking. You might, however, see an impact on how quickly we crawl. So if we notice that a server or a website returns a lot of 500 errors, then we might think that this is because we're crawling it too hard, that we're kind of causing these errors by accessing it so frequently, so we'll back off on the crawl rate that we use for that website. 
So from that point of view, if your website used to not have any server errors and suddenly, you see a lot of server errors, you might kind of double-check the crawl rates to make sure that we're actually still crawling as much as we used to crawl. Or maybe double-check the crawl rate to see if we were crawling way too much, and if our kind of backing off on the crawl rate is the right thing to do there. So that's kind of what I'd look at there. John, can I ask you a question? Sure. We have this page [? littering ?] in the SEO market in Chile for about five years. It's been in the number one [INAUDIBLE] position for five years. A few weeks ago, we made some redirections from some pages talking about similar things to this main page. [INAUDIBLE] SEO. Until then, we had a very good position. But then we have gone down in the positions. Maybe it's these pages that redirect to this main page. Could that hurt because of some penalty? We don't have any penalty as far as I know. But maybe it hurts, this redirection of these pages that have no content to this main page. Can you hear? These redirections? I can show you the-- Yeah. I probably need the URLs. Maybe you can post them in the chat. But I probably need to take a look at the details. So it's not something where I can just take a quick glance and say, well, this is what's happening there. But in general, as things change on the web, on your website, then that can result in changes in ranking, too. But if you can post the URLs in the comments, then I can pull that out afterwards and take a quick look to see if there's anything specific that I could point out. OK. Thanks, John. HTTPS boost. Websites that have implemented HTTPS, does it matter which SSL certificate it is? For example, if it's a standard certificate or an extended validation certificate. No. From our point of view, we treat those the same. If it's a valid certificate that's set up in a way that's using modern encryption standards, that's not obsolete, then we'll treat that as a valid certificate. And we'll look at that on a per-URL basis and rank that appropriately. So it doesn't really matter, from our point of view, which type of certificate you use. If there is something specific that you want to use that certificate for, then by all means, pick the one that works best for you. But if you're just trying to move everything to HTTPS, there are a lot of options out there. There are a lot of really cheap options, as well. So you don't have to go out and buy a really expensive certificate if that's not something that you explicitly need for anything special. What's the limit on the number of participants in the video? 10. So I think that's about it. We fill up very quickly in the beginning. So in general, if you want to join these and you haven't joined them before, then make sure you leave a comment in the event listings, so I can add you earlier. Otherwise, make sure that you're refreshing that event page fairly frequently, so that you pick up that link and can jump in as quickly as possible. I just bought an SSL certificate for my main domain. One of my folders runs WordPress. If I don't use SSL on this folder, would it harm my rankings or create duplicate content? So like I mentioned before, we look at this on a per-URL basis. And if there are some parts of your site that are on HTTPS and some parts that are on HTTP, then we'll take that into account appropriately. And some of those might have that kind of tiny boost for HTTPS, and other parts wouldn't have that. 
So that's not something I'd see as being a critical problem. On the other hand, kind of having a website that's split, HTTP and HTTPS, makes maintenance a lot trickier, in that you can't easily include HTTP content within an HTTPS website. It'll kind of flag that as a mixed content warning. So as much as possible, I'd kind of aim to move the whole website to HTTPS, so that you really have one version to maintain. You don't have to worry about the mixed content issues. And it makes things a lot easier in general. I'm an [? old ?] webmaster. Is there some way I can get walk-through assistance to make site maps? I've read everything I can about it, but I really need to have someone walk me through it. Is there a basic web-based tutorial or something interactive for this? I'm not aware of any basic tutorial to kind of handle that or create those [? type ?] of files. But in general, most content management systems-- so if you're using something to blog with or an e-commerce shop, they have either site maps built in already, or they have a plug-in that makes it really easy, where you just activate a plug-in, and then the site maps will start working. So that's kind of what I'd look for there. Depending on which system you're using to publish your content, see if there's something really simple to just activate that sets up site maps for you. And it might be that your content management system is already doing site maps for you behind the scenes, and you don't need to do anything at all. Let's see. Have you ever considered implementing dynamic search results pages? Many websites for competitive queries deserve to be top one. Why not have different sites on top one, top two, et cetera? Yeah, John. Why are you using static search instead of dynamic search? I asked that question. So can you elaborate, Cristian? I think that for competitive search keywords, many sites deserve the top one. And many deserve top two and top three. So Google, on the other side, is looking at testing in the search, no? But this linear catalog of the results, that maybe can be there for 24 hours, a week, or even a month in the top one. This linear catalog can be improved with this dynamic search. Why don't you use this dynamic search to test for the real number one, the very best number one site? That's my question. So basically, to have the top search result kind of swap out against other ones from time to time? Something like that? No. You can have an array for the top one. So the American [INAUDIBLE] keyword maybe deserves five sites in top one, or 10, or 11. I don't know. I think it depends on how competitive the query is. But for very competitive queries, I think that a very big amount of sites deserves the number one. So why don't you use an array for number one, a different array for number twos, a different array for number threes, and even the top tens? Different arrays? Different search? Very dynamic search. It's the end of SEO, I think. I don't think it would be the end of SEO. It would just change everything. But I think there's some really interesting ideas around that with regards to going from just those 10 blue links that we have in the search results to providing something slightly different, which might mean that maybe there are more results on a page, or maybe they're presented in a way that's very different than they are now. I don't have any good answer as to why we don't do this at the moment. But I know the teams are always working on trying out new things. 
And sometimes they're subtle changes. Sometimes they're really crazy changes. And these are things that are always being tested. So whenever you do a search, chances are you're within maybe 10, or 20, or 30 different experiments, where someone will be testing something with regards to the ranking maybe, with regards to the layout. Sometimes that's just like a pixel here or a slightly different color here. But sometimes those are really crazy experiments, too. So these are things that I know the team is always looking into and trying to find out things that work well. And they do lots of really crazy things in internal tests or with very limited tests. But they're always trying to improve the search results, to make sure that it kind of works really well for users. And while I can't really imagine 10 results being, like, in number one, I could definitely imagine situations where it makes sense to make that a little bit more dynamic. OK. All right. We just have a few minutes left. So I'll just open it up for questions from you all. I'll go ahead with a question regarding our previous talk about an automotive publisher. We have a bit of an issue, so to say. We're not sure if we should implement this. Let me just send you maybe a new URL. The idea is that, as a publisher, we have a lot of articles. So, like, let's say 100,000 pages with articles. And each page, since it's in the automotive niche, has a lot of photos. And each photo is on a different page. So that would skew the ratio between, like, pages with just a photo on them and maybe links to other photos, and the pages that actually have text content, and articles, and things like that. So like, I don't know, 75% picture pages where everything is just a photo and a few other mixed photos. So we're worried that Google might kind of think that, well, this website is mostly about photos. And we were thinking that we might-- maybe we should no index those. But then we are worried that Google won't list the photos when people search in image search. It's in this niche. A lot of people go to image search. And we don't know how that would play out. So we don't have a lot of traffic to these exact pages, maybe 1% or 2%. But still, it's 1% or 2%. I don't know. I don't have a direct answer that I can give you to kind of follow as a guide. What I might do in a situation like this is just test these pages and see what happens. Maybe take, I don't know, 1% or 5% of your website, and you implement one variation. Implement another variation on another set of those pages, and just kind of see how things work out. And maybe you'll find one or the other works really well. Or maybe you'll find that both of them work well, and you can pick and choose based on something else. Well, working well is-- we're not sure what to expect. For example, for a period of time-- so that's one of the articles I was talking about. And each of the links to the photos is on a separate page. And for a period of time, all of the photo pages had a rel canonical to the main gallery page. So Google only knew about the article page and its main gallery page. But since the rel canonical was there, maybe that wasn't the best option. Google wouldn't be able to see the photos and crawl the photos. And we removed that, and we basically let Googlebot go over all of the new pages. And we kind of started dropping, like, 5% per month in traffic ever since. 
We're kind of worried that this might be because Google kind of thinks the website is about-- not so relevant to these topics, as it was before. Because it sees, like, a million pages with just a photo or a couple of photos on the page versus 100,000 articles, let's say. I think, for the most part, we'd look at that on a per-page basis. So I think this is something you could just try out, and see which variation works better there. But it sounds like a really complicated setup. And it's really hard to kind of make a mental model of what exactly you're doing there and what the implications would be. But this seems like something where I'd try to test that as much as possible. And see if there really is a connection, like you suspect, or maybe if something completely different is playing a role there. Maybe, I don't know, users are happier with one version of these pages or not. And search engines don't really care, either way. Well, I know there are some Google algorithms that look at the website as a whole, not just page by page. So this is why I'm worried that having just 80% of the website pages with just a photo on them might affect the other 20% quality-wise, let's say. I don't know. It's really hard to say kind of offhand. Yeah. But for example, if I no index those pages, they're still followed. Google would still pick up the photos. Would they still show up in image search if the page that the photo is on is no index? Probably not. But I have to go now. Someone else needs this room. [INAUDIBLE]. It's been great having you guys here again. Thanks a lot for all of the questions. I'll try to think if I can find a better answer for you, Mihai. But I need to go through that in my head, the different options there. OK. Maybe I'll see some of you again in one of the future Hangouts, maybe later this week, maybe in two weeks again. Until then, wish you guys a good time. Bye. Bye, John. OK. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what I do is talk with webmasters like the ones here in the Hangout, or the ones who submitted lots of questions on the help forums, and try to get the information to you that you need to make great websites that work well in search. We have a handful of people here today, so if you're watching this live, feel free to jump on in. There's still room. And, as always, do any of you have any questions that we should start off with? So, John, I have a question. I was wondering whether you could look up my website and tell us if there is something holding back the site in terms of whether it is content, quality, or links? I know you've done that in the past in the English chats, and that would be extremely helpful for us. I can take a quick look, but it's probably not something I can give you, like, a short, 5-second answer on. So, yeah. I know you sent me an email as well. I need to take a look at the details there, too. But-- The only thing is that I think it's been 10, 11 months since we believe that we are under a Panda penalty, and we would just like, at a minimum, a confirmation whether that's the case or not. I think if that's the case, quite honestly we would love an example of something that, as a human, you would find Panda-worthy of a penalty-- of a sitewide penalty. If it's not, that would be also very helpful, so we don't think that it's something that we cannot fix anymore. 
There were problems with the site a few years back and we took a lot of corrective action. So we are also concerned that maybe, to a certain extent, Google cannot tell that the past few years have been very different than before. And it's kind of looking at those signals as well. So could we at least-- could you at least tell us whether it is a Panda penalty? It definitely feels like one, but maybe it's not. I guess the good part is, we are rolling out an update to the Panda algorithm at the moment. It's something that's more of a slower rollout, for technical reasons, but there is an update happening there. So if there's something from Panda that's affecting your site based on how we evaluate higher- or lower-quality content, then that's something that should be reflected in the future as well. So that's, at least, from that point of view, that's happening. Good. So it's fair to say that it has a Panda issue, given that with the Panda update, we are actually seeing a small-- a hit? As far as I can tell, the algorithms did at least think that it was not as high-quality as it could be. So that's something that will probably be updated as Panda rolls out, but I don't know which direction it's headed at the moment for your site specifically, since this is such a slow rollout. Fair enough. And then, maybe, could we, offline, maybe take a look at an example that would show, from a human standpoint, the issue with the quality? It's just one of these things where we're getting the exact opposite feedback when humans look at the site. And I'm open to the group also chiming in and saying, we're seeing, oh, you know the pages-- the content is not original. I mean, we have been cut and scraped like crazy in the past few years. Plus, we've had a history a few years back of doing certain things that were not right. And so maybe the combination of those two things is throwing things off. But definitely it seems like there's nothing worthy of a sitewide penalty. But maybe we're wrong. I don't know if I can give you specific examples there. We tend not to do that for algorithms when they review a site, unless it's something really obvious that we could point at-- where we can say, well, your whole page is hidden text or something like that, or for technical reasons. That's something where we might be able to give examples. But when it comes to quality algorithms, usually we can't give any specific examples from our side. But I think you're taking the right approach and taking a lot of this feedback in and trying to collect as much feedback as possible. So I'll definitely see if there's something specific we can point at which is easy to recognize which might not be specific to just this quality algorithm. But I can't guarantee that I'll have something specific there. I think in the past, you had suggested the idea in these chats to do site audits where webmasters volunteer their sites. And I'm wondering, is this one where it would be helpful for the community to understand other things that are important, that maybe are not so obvious? Because you're definitely not-- tell me if you agree with that, but you definitely don't land on the site and you're like, oh, this is a spammy site, right? At least from my clicking around on your site, I'd agree with you on that. 
But it's been a while since I've taken a look at the details there, so it's just hard for me to say that, from my first impressions, our algorithm should be doing this or this, because we do take into account a lot of different factors in these algorithms. But I have passed that on to the team to kind of take a look there. But I don't know if I really have anything specific to point at there. Maybe it's something where, if we do a site-cleaning type hangout, this will be something where we could pull out some examples. In the past we've been able to do that. So I'll definitely keep that in mind. OK, thank you very much. Sure. John, can we step in with another question, or do you-- Sure. Want to take the-- OK. On the 13th of July, Google started sending out emails about hreflang implementation. Well, I've taken into consideration one specific website I've been working on, and I cleaned it up like seven months ago, and all the examples I find in Webmaster Tools turn out to be wrong. I mean, I have like 4,000 hreflang tags with errors, and I took them one by one, and they all are just fine. But Google, for some reason, doesn't seem to pick them up. Should I send you an email with specifics about this? It might be useful to have the details. So one thing that I've seen on a number of sites is they implement the hreflang in the right way, in that it links to the other pages, but they implemented it in a place where we can't pick it up. So for example, if you have a head section on a page, and within the head you have a JavaScript that writes something to the document, then theoretically the head section closes when it processes that JavaScript. Oh, I see. No, I've implemented it in-- --and anything kind of below there will get ignored. And it could be that maybe you have a no index or hreflang below that, and then we would ignore that because it's not in the head, and it would be almost a security issue if we were to pick up meta tags from the content of your page. Oh, I see. No, I've implemented it in the XML site maps. All of it. So I'll send you the details later. OK, sure. So my question was, do you know if Google takes more time now that it sends out these emails until it completely recrawls all the websites with the implementation? No. It's picked up essentially immediately when we crawl and index those pages. OK. So as we crawl and index the pages and see that the implementation is correct, we'll reflect that in Webmaster Tools. It might take a few days for the data to be visible in Webmaster Tools, but that's where we show it. So seven months is out of the question. Seven months sounds like there's something that we're still not picking up properly. Yeah, all right. OK. I'll get back to you with an email later. Thank you. OK. All right, let's run through some of the questions, and then we'll open it up again for more questions from all of you afterwards. My site got a message in Search Console about having JavaScript and CSS files blocked. I use a CDN for all my files. I've read that Googlebot doesn't crawl these. How can I allow Googlebot to crawl these files and still have them on a CDN? By default, we do still crawl files on a CDN, so it's not that we would prevent-- block crawling of them in any way. They're essentially crawled like any other content. But I've seen some CDNs have a robots.txt file, and they block crawling with that robots.txt file on the CDN. So that's something you might want to take a look at. 
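As a quick way to run that check, here is a minimal sketch; the CDN hostname and asset path are made up, and it uses the third-party requests package together with Python's standard robotparser to test whether a sample asset on the CDN would be crawlable.

# Minimal sketch: check whether a CDN-hosted asset is blocked by the CDN's robots.txt.
# Hostname and asset path are placeholders; requires the third-party 'requests' package.
import requests
from urllib import robotparser

cdn_robots = "https://cdn.example.com/robots.txt"
asset = "https://cdn.example.com/assets/site.min.js"

resp = requests.get(cdn_robots, timeout=10)
print(cdn_robots, "->", resp.status_code)

rp = robotparser.RobotFileParser()
# Parse whatever the CDN serves; a 404 body contains no valid directives,
# which effectively allows everything.
rp.parse(resp.text.splitlines())
print("Googlebot can fetch the asset:", rp.can_fetch("Googlebot", asset))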
I think in your case, the CDN is actually using your main site's robots.txt file, so you might want to double check that it's actually allowing crawling of those files. But there's nothing that blocks us by default from crawling files that are hosted on a CDN. I wanted to know if there's a specific way to declare that we're doing aggregation in some pages. I know about the Google on/off tags. So first of all, the Google on/off tags are for the Google Search Appliance. They don't have any effect at all in web search. It's only if you run your own search engine with the Search Appliance that they would have any effect. And in general, what happens with aggregation is, we recognize that it's super big content. We try to ignore that in search. And it's mostly just a problem if a large part of your site is actually just aggregated content-- if there's no unique value of your own. And that's not something where marking up the aggregated content will make any difference. Because if users come to your pages and they see all of this aggregated content, no unique content of your own, then they'll still feel that this is kind of a lower-quality scraped, aggregated site. So from that point of view, it's something where you don't need to mark up aggregated content. You just need to make sure that your website has great content of its own and doesn't just rely on aggregated, scraped, pulled-together, rewritten content. Can you crawl, fetch and index protected pages so that a general user can find information when they use our Google custom search on our site and only display the results on our results pages? I took a quick look at that, and Google's custom search engine crawls the way normal web search crawls. So we have to be able to actually see that content in a public way to index it for this custom search engine. If you want to keep your content behind authentication in some kind of protected way, you'd almost need to use something like a Google Search Appliance, which you can run on your site, to kind of index the content within your website, or intranet, or wherever you have that. Discovered about 10 sites that have completely copied our site's content, images and text. What do you suggest on something like this? If they link to us on all pages as the original source, is that fine, or do we need to take them down via the DMCA? The DMCA is a legal process, and I can't give you legal advice with regards to what you should or shouldn't do there. I imagine this might be something where it could be relevant, so you might want to talk with someone who can give you some legal help in that regard. In general, if we recognize that sites are copying content, we try to, kind of, just ignore that in search and focus on the actual unique content on those sites, not on the copied content. So that's something where we should be able to get that right in search. But if you feel that there are legal reasons to kind of block that content from being published or shown in search, then maybe get legal advice to see if that makes sense in your case. John, can I just ask a follow-up on that? Sure. I've been advised in the past that the one thing that might work in that case is using a canonical tag, but using the same page as canonical. So rather than saying the real page is over here, you canonical the page itself, saying the real page is this one. And therefore, if someone scrapes the entire code, they take the canonical tag as well, and it basically tells you exactly what you need to know. 
If they scrape the page like that, sure, that's definitely an option. I see a lot of scrapers who change the URLs that they find anyway, so they turn the canonical into their canonical, which doesn't really help you that much. And if it's someone who's copying the content out of your site and just placing it on their own site, then they're not going to copy any of the metadata with the page. So sometimes it can help. It probably only works against the really, let's say, incompetent scrapers who we would recognize as being irrelevant websites anyway. I recently received many messages in Google Webmaster Tools Search Console that Google can't access CSS and JavaScript files. Why does Googlebot need to access these files? What kind of data is Google looking for in these files? So, we're not looking for anything crazy in the JavaScript files. We essentially just want to render the pages like they would render in a browser. So we want to see what users are seeing. We want to make sure that any content you're pulling in with JavaScript, that we can also see that, so that we can rank those pages appropriately in search. And to some extent, that also plays in with the mobile-friendly guidelines that we have, where if we can recognize that a page is mobile-friendly, then we'll be able to show that appropriately in search. We'll have the mobile-friendly label. We can give it a slight ranking boost, those kind of things. And we can only recognize if the page is mobile-friendly if we can look at it how a browser might look at it. So for that, we need access to the CSS and JavaScript files. And this isn't something that's really new, so there's no algorithm or ranking change that's happening here. It's really something that over the last, I don't know, maybe half a year, year now, we've been recommending that people let us crawl the JavaScript and CSS files for these reasons. And obviously, if we can't access the JavaScript and CSS files, then we can't tell that you've done a great job on making your website mobile-friendly. We can't tell if there's content that you're pulling in with JavaScript that we should rank your website for. So these are things where we've given out a lot of information on what you could be doing to make this better. We've added tools to Search Console about that, and now we sent out an email blast to let people know that, hey, this is really something where you could improve your website by letting us crawl these pages so that we can recognize what we should rank your website for better. I think maybe the messaging was a bit confusing, and I saw that a lot of sites were being flagged or had this message sent that have plug-ins that pull in maybe those small things with JavaScript, and those were blocked. I've noticed a lot of those plug-ins actually updated in the meantime, and if you have your WordPress or other CMS set up to auto-update those plug-ins, then that's something where probably this issue doesn't play a big role anymore. But I'd definitely take a look to see that we're at least able to recognize that your site is mobile-friendly. You can use the mobile-friendly test for that. It also gives you information about the blocked URLs. Yeah, question. Nothing? OK. Maybe we can get back to this in the end as well, I guess. In order to benefit from the small SSL ranking factor, would you put the certificate on the entire site, or just the checkout process? From my point of view, you might as well put it everywhere. 
So we use-- we look at the URLs that are actually indexed in the way that they're indexed, and we check to see if the SSL is working there-- or the TLS, the new name is TLS-- to kind of determine whether or not we should use this as a small ranking factor there. And this is per URL. So it's not that we're specifically looking for a checkout process and saying, well, only this needs to be secure. It's really per URL. And if these are landing pages on your website, where you're getting traffic through search, then we will use that there. So it's definitely something I'd put across the whole website. I don't see much of a reason to kind of hold back on putting it across the whole website. Obviously there's a lot of technical work in implementing HTTPS across a website. You have to make sure that all the embedded content is also HTTPS, all of that. So it's not something where I'd just switch it onto HTTPS and hope that everything works. I'd really try to go through and make sure that you're doing everything right. How can affiliate sites rank well? Does Google trust them? Are there tips? What should we do? So, of course, affiliate sites can be really useful. They can have a lot of great information on them, and we like showing them in search. But at the same time, we see a lot of affiliates who are basically just lazy people who copy and paste the feeds that they get and publish them on their websites. And this kind of lower-quality content, thin content, is something that's really hard for us to show in search. Because we see people, maybe like hundreds of people, taking the same affiliate feed, republishing it on their website, and of course they're not going to all rank number one for those terms. We really want to see unique, high-quality content. And that's something that we do like to rank. So it's not something where we per se say that an affiliate site is bad. We just see a lot of bad affiliate sites, because there are people who try to find a fast way to publish content on the web. Which, I mean, sometimes being fast is also good, but you really also need to make sure that you have a really great website where people want to go to your website to get the information that they can't find anywhere else. And if you have affiliate links on a website, that's great. That's not going to be something that we would count against the website. Case-sensitive question. Our company is shifting from uppercase or mixed-case EN-US to lowercase in the URLs. The canonicals are currently lowercase. The internal links are mixed. What should we do? In a case like this, the thing to keep in mind is that any kind of a URL change is a URL change. So we have to recrawl and reindex that and forward all the signals that we have to the new canonical URLs. And that's something that can take a bit of time. If this is all within your website, then that's less of a problem, because maybe we'll still have the old version indexed, maybe we'll have the new version indexed, but regardless of which one we send people to, they'll still make it to your website. So that's generally less of a problem. 
And what I really just recommend in a case like this is making sure that you're as consistent as possible with these URLs-- that ideally, you have a redirect set up from one version to another, that you use a canonical link in the same way consistently across a website, if you use hreflang that you use that also consistently in the same way, that all internal links are done in the same way, so that when we look at your website, we really have a clear signal saying, well, these are the old URLs, but everything is focusing on these new URLs, so Google should just focus on these new URLs as much as possible. And that really helps us to make sure that we kind of follow your preference. It's not that you're going to have any kind of a ranking change because of a URL change like this, but that if you have a strong preference for one of these versions, then let us know about it so that we can actually help you to get that implemented. Does Google use analytics information to decide about a website's authority? If not, how does Google measure a site's user experience factors? We don't use any analytics information at all for web search, for crawling, indexing, or ranking, so that's not something that we take into account there. I do think sometimes some parts of Google use some aggregated information from analytics with regards to kind of understanding what's happening there. And that's usually with-- there's I think a setting in analytics where you can say, I want to allow sharing of my data. But that's not something that we'd use directly on the site level for crawling, indexing, or ranking. I noticed Googlebot crawls nonexistent pages after the end of a pagination series and throws server errors. This happens from the last page in a series. For example, the last page is URL?page=34, and then Google goes to URL?page=100, which doesn't exist. So one thing here is that this is a really common failure mode on a lot of websites, in that they'll have pagination set up for pages that don't actually exist. So especially on long lists, we've sometimes seen that you can go to page 100 and it'll have a Next and Previous button. You can change the URL to 5,000, and it'll still have a Next and Previous button. You could just continue clicking the Next button until you're at page 9 million, or whatever. And that's something where Googlebot is happy to click on links on your website, and it'll continue clicking on those links until it finds something interesting to index. So if there's really no next page for a series that you have, make sure you don't have that Next button so that we actually stop crawling there. So it's not that we recognize there's a number here and just try random things out. We really clicked on that at some point, found those URLs, and we were trying to recrawl them. So just try to make sure that you don't have Next buttons that lead to sections that don't really have any data. That happens in lists. Sometimes it also happens in calendars, where maybe you go to the year 9 million and still get a calendar entry, saying, hey, there are no events here, but maybe tomorrow. So those are the kind of-- we call them infinite spaces, where Googlebot needs to recognize that actually there's nothing interesting to find here. My question is Panda-related. First of all, why have you guys decided to roll this update out so slowly? And second, can you tell us how this update is different from the other updates? Is it targeting more than just content? This is actually pretty much a similar update to before. 
For technical reasons, we're mostly rolling it out a bit slower. It's not that we're trying to, I don't know, confuse people with this. It's really just for technical reasons. Here's the same question I think. We have a lot of content-- a lot of on-page content at the bottom of our page. We use Click to Expand links in order to improve the site design. Do we need to remove those links in order to improve how Google sees our pages? Does this affect our rankings? So this is the old question about hidden content on a page, and what should we do there? So what generally happens when we recognize that content is hidden on a page like this is, we just don't see it as being primarily important content for this page. So we'll crawl and index the page. We'll find this content. If someone is searching specifically for it, we'll still show that in search. But if someone is searching for something more general, and we know this is actually not directly visible on a page, then we assume that this is not the most relevant content for this specific page. And we won't focus on that so much. So with regards to the question of should I leave it or remove it, that's essentially up to you. If you think that this is interesting content for users to additionally have when they go to your pages, then maybe that's fine to keep. It doesn't cause any problems for your page like this. But if you think that this is content which is really critical for your website, which you want to rank for, then that's something that you might want to place in a way that it's actually visible directly on the page. Or maybe you want to kind of move that content into a separate page, where you say, well, this is a lot of information-- additional information. It's critical for some users, though, so I put it on a separate page and let people get to like that-- let Google index it like that so that it's directly visible. Could you tell the difference between a 302 and a 303 redirect, and if there would be a case where you would use a 303 redirect. So these kind of specific redirect questions come up from time to time. From our side it's actually pretty simple in that we only differentiate between permanent and temporary redirects. And we try to recognize which type of redirect matches best. So if you're looking at some less common type of redirect, and you're wondering how Google treats this-- well, it's basically just either a permanent redirect or a temporary redirect. With a temporary redirect, we try to keep the redirecting page in our index as a canonical. With a permanent redirect, we try to keep the redirect target in our index as a canonical. And what can also happen is if we recognize that a temporary redirect is actually pretty permanent, then we'll treat that as a permanent redirect as well. And there's no fancy, let's say, page rank passing problem there with temporary redirects. You can have temporary redirects on your website. Sometimes there are good reasons to have temporary redirects. It doesn't cause any problems for your website. It's not that the page rank disappears or your ranking drops. It's really just a question of, should we index this URL or should we index the other URL? After getting hacked, we had to take down the site for a day. About 200-plus 500 errors showed in Search Console, now completely fixed and running smooth. How long will it take to regain the rankings we had before we had to take the servers offline for a bit? 
So I guess first of all, if you recognize that this is happening, and you have the ability to do this, I strongly recommend using a 503 result code for all requests that come in from search engines or from users. With a 503, we know that this is a temporary situation and you're working on resolving this as quickly as possible. And Google will say, well, fine, we'll just come back and look at it again in a couple of days. Whereas if you take the server down completely, or if you serve 404s, for example, then Googlebot will come and say, oh, looks like this website doesn't exist anymore. These pages don't exist anymore. I'll drop them from my index. So if there's a way that you can serve a 503 when your site is hacked and you're taking it down for maintenance, that's really the best thing to do. That way you shouldn't really have any impact with regards to crawling-- your rankings, at least, or your indexing. Of course, if you leave the 503 for a longer period of time, if you take your website offline for a couple of weeks, then we'll assume that this is a permanent state and it's not just something temporary that we can look over. With regards to this situation, where maybe 500 errors were showing, or the server was down, that's something where once we recrawl those pages, we'll be able to take that into account again and index them-- rank them as we did before. So it's not something where we kind of artificially hold a website back, but it's more of a technical issue that we have to recrawl those pages, recognize they're OK, and put them back in our index together with the old signals that we had. To some extent, we try to recognize this kind of failure when we see it happening, and keep those pages in our index anyway, just because we think maybe this is temporary and the website will be back soon. So some of that might have actually worked here. Some of that might be that we recrawl those pages a bunch of times and they drop out and we don't have them for ranking anymore. The good part here is that if we recognize that a page is kind of important for your website, we'll generally crawl a bit more frequently. So if it drops out of the index because of failure like this, then we'll generally recrawl it a bit more frequently and bring it back in a little bit faster than we would with some random page on your website that never changed for the last 10 years. So my guess is something like this, where if you had to take the server down for a day, you might see, maybe a week, two weeks-- at most, maybe three weeks of time where things are kind of still in flux and settling down again. But it shouldn't take much longer that that. I think you're muted if you're saying something. You got me? Yeah. OK. Sorry about that. Anyways, you know my site, and we got hacked, and it's been the worst week ever. But the thing about it is, is we were able to finally figure out that we were hacked, because we didn't even realize that it was in MySQL. There was like a billion database entries and it was killing the server when we'd have, like 500, 600 users at any given time-- server would just [SNAPS FINGERS] drop out. Anyways, it happened, and we couldn't get in to make it a 503. We were just in hurry mode. Anyways, everything's fixed. We use SiteBlock now to try and help deter that. We're also using CloudFlare kind of stuff to stop that stuff from happening. But the pages are still there. The inner pages that I'm talking about, they're still indexed. 
They just went from, like, number one to number-- page four or five, or bottom of page one. So they're still indexed, it's just like all the inner pages have just, after this happened-- another note, though, is we noticed that there was a security hole in our whole theme. So we have also updated our theme to a brand new theme, so I was wondering if maybe the design is playing a role in us dropping rankings basically on every inner page that we have. Like I said, a lot of those were sitting right at number one. And now they're either page two, three, or four, or the bottom of page one. That's my question. They're still in the index. Now, it's hard to say because it kind of depends on how the pages were hacked and what was happening there. So, especially if they were still indexed like this, and if someone went through the database and injected a bunch of-- I don't know-- hidden links, or spammy links to other sites, or spammy content, even, like parasite hosting on your website through the database, then that's something that can theoretically affect the ranking of those pages, because suddenly they look very spammy, and then we think, well, oh, something's crazy here. But usually that settles down very quickly as well. But-- so what I'd recommend there is if you notice that this is still happening, maybe next week or something like that, then feel free to send me some URLs where you know that they used to be a lot different in search, and they're not showing up at all anymore. So I can kind of take a look there to see what happened, or if there is something even still happening there. So sometimes, for example, we'll see people hack a website and cloak it so that only Google sees this problematic content, which is really hard to diagnose on your side, because you don't see it. So that's something to kind of watch out for. Maybe something like that happened and you fixed it in the meantime. Those might be things where maybe we point you at something technical that happened or that's still happening, to kind of see what's happening there, or maybe it's just a case that our algorithms need a bit of time to settle down again. OK. OK. All right, thanks. Sorry to hear about your hack. Oh man, it was the worst thing. I mean, we got over 1,000 people on the site because something went viral, and all of a sudden, the server just says nope, I'm done. And then we're all freaking out, hey, we're going viral, but now the servers are all down. Oh. Terrible timing. I mean, getting hacked is never fun. Finding it quickly really helps. I remember my site, way in the beginning, used to get hacked every now and then, and it's-- I always found it worse when you look at it after a month and you realize, oh, it's been hacked the whole time and I didn't realize it. But it's always frustrating. Mm-hmm. Panda has been run many times over the past few years. This Panda run has been set to be a crawl lasting several months. Does that mean it's moving slowly for one pass over the internet, or what's happening here? So it's not that we're crawling slowly. We're crawling and indexing normally, and we're using that content as well to recognize higher-quality, lower-quality sites. But we're rolling out this information in a little bit slower way, mostly for technical reasons. Do absolute URLs in a website navigation improve the crawl rate by lowering the Google crawl budget? No. So absolute or relative URLs, they don't affect crawling as long as we find the same URLs. Sometimes it makes sense to use absolute URLs. 
For example, if you know that your server automatically serves different variations of your website, like www, non-www, then having absolute URLs within your website for the internal links really help to make sure that we pick up the right version to show in search. But it doesn't affect the crawl budget. It's not something where I'd say you're trying to-- a big bump from going from absolute to relative or back. If we really don't-- [INAUDIBLE]. Yeah. On the question before last, does technical reasons for Panda doing a slower rollout, would that include receiving feedback-- aggregate data feedback, quite likely, during that process, which takes time? I don't think so. So you mean, like, webmasters giving us feedback about their sites? No. Like, feedback directly through either Chrome analytics or any other aggregate data feedback that is coming through, which is-- like, sometimes you mentioned we might look at this change and see, well, more people are clicking here or there, so our results must be getting better or worse. No. So this is really just an internal, technical problem that we're rolling this out slower for. So it's not that we're making this process slower by design. It's really an internal issue on our side. Thanks. According to RSS board, the elements linked title and description are mandatory in an RSS site map, whereas according to Google, link and pub date are enough. Will there be crawling issues without title and description? From our point of view, you can focus on that URL and the date if that's all you want to provide, but in general I'd recommend, if you're going to provide an RSS feed, make sure it's a well-formed RSS feed that can be reused for other things as well. So it's something where, if you're already going to do the work to kind of create this URL on your website, then you might as well do it in a way that works for all other systems that process RSS feeds. So from that point of view, from Google's side, of course, you can limit it to the minimum that we recommend. From a general web platform point of view, I'd really just recommend making sure that it works as a normal RSS feed for anything else. How long does it generally take for Knowledge Graph markup to be reflected in KG? Corrected KG logo contact information on our client's website three weeks ago, but the change hasn't been reflected. Markup tester recognizes the markup properly. I can imagine that there might be situations where it takes a couple of weeks for this to be processed. So three weeks, I'd say, is probably around the borderline case where sometimes it can take this long. Sometimes it can take maybe a little bit longer. Sometimes it takes a bit shorter. I'd recommend maybe giving it another week or two, and if you still don't see any changes there, make sure to send us some examples, some of the URLs where you made these changes, so that we can double-check to make sure that our processes are actually picking this up properly. We saw that cached pages URL for our website showed a 404 for a day, and later it showed up the next day. I asked other experts about this. Can this be a temporary issue, or is it something I need to check on my website in terms of indexing? If these pages are still indexed properly, one way you do that is just do an info query or a site query in search, then that would be fine. Sometimes they're just cache pages that disappear on our side, where we don't show them at the moment. That's completely normal. That can happen from time to time. 
If you're sure that your pages are still indexed properly, if you're sure that there's no technical problem on your website, then that's something I wouldn't necessarily worry about. Is there any update on Google blocking the spam traffic that can mess up your analytics data? So I haven't heard a lot about this recently. On the one hand, I know the analytics team is working on resolving that. On the other hand, I haven't heard a lot of complaints about that recently. But I've also been on vacation, so that might have something to do with that. So my guess is maybe the Google Analytics team has been working on this to kind of improve the situation. But I know they're working on cleaning that up in some ways. I don't know the details of what specifically they're going to be doing, though. Panda-- is Panda running on a page-by-page basis or on a sitewide basis? We do try to run it on a sitewide basis to kind of recognize lower-quality, higher-quality websites. Which also means that if your website has a lot of lower-quality content and some really great content, you should make sure that maybe the lower-quality content isn't that much, or that you block it from indexing if you know about that. I noticed recently that there are many hackers intentionally-- there are many hackers who intentionally hack to get a backlink from the gov-- oh, probably from government websites. Can you explain? So I guess this kind of points at hackers that are hacking government websites, or any other kind of websites, to try to get a link there. And that's, of course, something that isn't really a good practice, because if you're hacking your government's website-- chances are they'll get kind of upset about that. I don't really know what else to add there. Essentially, they're hacking a website, they're leaving unnatural links, and both of those things are kind of bad-- not something we'd recommend doing. And if this is something maybe a previous SEO did, then I'd definitely work on getting that cleaned up. Let's see, a bunch more questions here. Is there any way to track traffic on an internal page of a mobile app? I don't know, but I think there are analytics solutions that help you track usage within a mobile app. But I'm not really the expert there. There's also an app Hangout in a couple of weeks-- indexing. App indexing Hangout, yeah. I think from an app indexing point of view, that's something we probably wouldn't cover there, because we're looking more at how you can make pages from your app visible in search. But the tracking side is usually something that you'd implement within your app to see where are people coming from, what are they doing within my app. How do I report spammy SEO campaigns? You can use the spam report form, as always. If it's something bigger, where you think this doesn't really fit into a spam report form, you can also email one of us directly, send us a note on Google+. We can't promise to respond to everything that we get, but we do review them and we pass them on to the spam team. We have a huge drop of indexed pages within the last 60 days. So I started with SEO work, submitting a site map, and using Fetch as Google, but there's no difference yet. Can I get a hint on how to fix this? Usually-- so I guess there are a few things involved here. On the one hand, if you're looking at the indexed pages with a site: query, that's probably a really bad way to get a count of the indexed pages, so I wouldn't recommend doing that. 
If you're looking at the index status information in Search Console, then you need to keep in mind that it's specific to that variation that you have verified. So if your website goes from www to non-www, or has some pages on HTTP, some on HTTPS, or if you have multiple domains with the same content, then the index status information will be specific to that version that you have verified there. So you'd need to, on the one hand, make sure that canonicalization works, so that one version has your content indexed, and on the other hand, maybe double-check the other variations to make sure that you're not missing anything from a bigger count. The site maps index count also works in the same way, in that it focuses specifically on the URLs that you have specified in your site map file. Usually if there's a drop like this, it's hard to know offhand if this is a technical issue-- maybe your website is serving no index on some of these pages-- or if this is actually expected. Maybe we crawled your website and indexed it with a bunch of different session IDs, and we indexed thousands of pages, but actually the content on your web page is maybe just 100 pages. So that's something where you would expect this number index count to go down over time as we recognize that we basically got lost on your website and accidentally indexed a bunch of pages that don't need to be indexed. So it's hard to just look at that number and say, oh, you need to do this. But rather, you need to look at a bunch of things, from technical issues to canonicalization, to make sure that you're really doing things right there. So John, is submitting to index, then, in Fetch as Google, is that not guaranteed that you'll-- I assume it's guaranteed that you will index, but not that you'll keep it for very long. Exactly. Yeah. All right. So it's not guaranteed that we'll index it. I think maybe like 99% of the cases we do index it directly. But we also need to be able to kind of keep it for the long term, and just indexing it once doesn't necessarily guarantee that we'll keep it for the long term. John, is it possible that if you have too many comments, and you have them paginated and fully indexable, that that might backfire? I know that the-- that Barry says that the only change he made when he recovered from the latest Panda update was that he made the comments JavaScript only. Was that the-- do you think that could be somehow linked, or-- because he didn't make any other changes, he's claiming. I don't know specifically what happened with Barry's site, so it's hard to say, but in general, we do process JavaScript nowadays. So if the comments are in the HTML, or the JavaScript, it might be that we're just treating them exactly the same. With regards to paginated comments or not, that's something where you kind of have to take a look at the quality of the comments there and make a call based on that. So I wouldn't say that a large number of comments is always great, or that a small number of comments is always great. But you really need to keep in mind that we see this as a part of your page. And if this is high-quality content, if this is something that helps a user make a decision one way or another-- for example, if these are reviews, where people are saying, well, this product is great or this other product is great, and this is why, and here's some detailed information on how I use this-- then that's something that can be really useful. 
On the other hand, if these are just comments saying, hey, visit my site here, or visit my site here, cheap shoes, then that's something that we'd say, well, this is kind of low-quality content. Where-- when we index it for your website, it's probably not what you want to stand for. What if it is somewhere in the middle? So it's not very insightful. It's definitely not spammy. It's legitimate comments, but it's not the most insightful comment also. A lot of them you won't read them and be like, oh, wow, I had never thought about that. But if it's something like that, so it's just normal discussion happening, with most of them I would say, since most people are not experts, not anything earth-shattering being revealed, or some great insight. Or sharing their personal experience. What would you say-- and they're not reviews. It could be either way. So it's really hard to say there. What I would do-- try to do it in a case like this, where you have a lot of comments-- I think in general having people engaging with your content is a good thing. But what you might want to think about doing is finding a way to recognize the really great comments in there and highlighting those on the page, and making it possible for people to click through to the rest of the comments, but maybe, depending on the situation, blocking those from being indexed, for example. So kind of like when you go to a product page on-- I think Amazon does this-- you scroll to the bottom, they have some reviews there, but these are usually the reviews that for one reason or another, they recognize these are important reviews. Or they kind of give a balanced view-- maybe some positive, some negative reviews. And all the other reviews are available from there as well, but you have to kind of click through to see those. So maybe that's something that would make sense there, so that you really, when you look at your pages, you say, well, all of the comment-- all of the content I'm providing on this page, including the user comments, is really exactly what I want to stand for. This is high-quality stuff. This is something I feel proud to provide to anyone who wants to visit my website. OK, that's very helpful. And is it fair to say that Panda is not-- doesn't have a link element? Or is there a link element on Panda as well? So if people place, like, spammy links on a website or something like that? No, your inbound links on the site. I don't think so. OK. So it's all about the quality of the content of the site? Yeah. OK. I mean, we have other algorithms that look at the quality and the quantity of links that are going to a website, and sometimes that plays a role with other things, but I don't think with recognizing the quality of the content, we would need to look into external links there. OK. Helpful. Thank you very much. All right. We just have a few minutes left, so I'll just open it up for all of you. Can I ask another question, John? All right. I remember-- I don't know if you remember I emailed you a while ago about the number of links going from our old site to the new within Webmaster Tools, and that was around the 100,000 mark, and it's now gone up to 200,000, when there's no links at all between the two sites. And within Webmaster Tools, every single one of those links is, when you click through to see what they are, they're all via an intermediary link, every single one of them. And so I'm thinking it must be from the time when we had the old site 301ed to the new. 
So is it possible that Google can still be seeing a 301-- you store the fact that there was a 301 in your system, and even though we've updated the site months and months ago, you're still, somewhere in your memory, you're saying, well, I know that there's a 301 in place to that page. So I'm still going to consider there to be a link on it, even though there isn't and hasn't been for months. I would tend to see that more as a reporting problem in Webmaster Tools, or in Search Console, where maybe we're not showing the newest data that we should be showing there, and-- But then why does it update with more information that's even more wrong than before? Don't know. [LAUGHTER] It is confusing. We shouldn't be doing that. Yeah, we shouldn't be reporting something like that. I'll double check with the team on that later today. OK, thanks. John, regarding in the Webmaster-- actually, the Search Console-- have you've given, or has there been further consideration recently about extending the three months of data to a longer time period? We discuss that all the time. We always-- I don't know. I think they're getting tired of us pushing for this. But we do look at it with the team to kind of see what we can do there. To some extent, it's kind of balancing different priorities. On the one hand, we'd like to offer new features as well. On the other hand, people like to have the old features expanded a bit, more storage. Adding more storage there isn't that trivial in the sense that we can't just, like, say, oh, well, we'll just double the amount of storage there. We really need to kind of then rethink how Search Console uses this data to make sure that it's still snappy to use, that it's not-- it doesn't get bogged down, like, turn into something really slow. So that's something where-- It's not more problematic than analytics? It's-- --considering the amount of data it has to store? It's not that it's impossible. It's not that I'd say people, or the engineers, can't just do that, but that it's a nontrivial amount of work to actually implement that. It's not just a dial where we can twist and say, oh, well, maybe we'll just add like 100% more, and that'll be OK. It's kind of like, if you-- I don't know if you have a-- I don't know. If you need storage within your database, and you go from like 1,000 users to 1 million users, then that's a big step. That's something where you can't just use the same setup that you have and say, well, it'll work well with twice as much or 10 times as much storage. You really kind of have to rethink your design when you do that. And analytics has done that earlier, or maybe from the start, I don't know. And they've kind of planned for that. On our side, we made a product decision in the beginning to say, well, we have our three months, and people can download the data if they want to keep it longer. But it's focused on those three months. So it's something where, when we talk with the engineers and the product managers, they're saying, well, should we put more storage here, or should we add this really awesome feature, you know? Tell us what we should do. And of course, our preferred answer is, do everything. But there's a limit to what can be done. So we kind of have to focus on individual parts and make a decision on either adding more storage, adding this feature, building out this other feature, working on an API. All of these things have to balanced somehow. Is how people might misuse or abuse that data also a significant consideration there? 
In other words, they may see, or think that they see-- understand more about the algorithm data or different updates, et cetera? For some kind of data, that could play a role, like if we added a lot more information for links, for example. With regards to how we treat those links internally, that might be something where we fear that people might misinterpret or misuse that information. With regards to specifically the search queries, I don't see that as being a problem. That's, I mean, the search query data, if you download it now, you'll have it next year here as well. And just because you've downloaded it now doesn't necessarily tell you more about our algorithms than if you didn't download it now. All right, thanks. One thing I suspect will happen is, when we finalize the API for search analytics-- for the search query information-- that maybe some people will offer some tools to kind of download this on a regular basis so that you can look at it offline on your side. Or that you can keep it on their server, or on your server, and kind of have this aggregated collection of longer-period-of-time search analytics information. With the new API, that's a lot easier than it is with just downloading the CSV files now. So I suspect some of that might happen. And if you're a programmer and you have access to the API beta, then that might be something to try out in the setup. But I don't think, from our side, we'd be offering a significant longer period of data for the search analytics information, at least not in the initial drop. You know, you can load it into Google Docs and stuff the data quite easily and use it like that. I mean, if there was even some way that we could, even from within the Search Console, automate a couple of steps of anything like that-- Yeah, I mean, I knew that. I mean, we can do that with a API already. Yeah, yeah. But if something like that might be available within the Search Console-- I mean, you can do that currently with the download links. Just push that into your spreadsheets. That would work. But with the API, you can, I don't know, have a web app where you just push the button once a month or that automatically does it once a month and pushes this data into your Google Spreadsheets, or whatever you're using. The tricky part there is, of course, it pushes the data in a list, or in whatever format that you give this data. It's not that you'd have the same UI, where you can filter and pull things out and compare that easily. But I'm sure there'll be people who have innovative solutions to make that possible. Yes, certainly. All right. Let's take one more question. We're a bit out of time, but-- let's. Hey John. Since I jumped in late, I hope I can get the last question. So actually, it's two. One, just to ask you if you got my email regarding the spam report. I sent you that, I think, a week ago. Just to know if you-- it got your way and maybe you forwarded it to the [INAUDIBLE]. It's probably on my vacation list somewhere. [LAUGHTER] OK. And can confirm whether you read it, when you read it, so, that would be fine. The other thing was regarding-- my client had a bread crumbs issue regarding the home element on the displaying [INAUDIBLE]. I actually managed to solve that by using JSON-LD. So I guess this is something maybe for people who are struggling with this issue, where the home element isn't fully visible on the page, or [INAUDIBLE] or something like that, JSON-LD seems to work fine as long as I also mark up the home element. 
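To make the markup side of that concrete, here's a rough sketch of the kind of BreadcrumbList JSON-LD being described, with the home element included as the first item. The domain and names are placeholders, and the snippet just builds the JSON that would go inside a script type="application/ld+json" tag:

```python
# Hypothetical sketch of BreadcrumbList markup with an explicit "Home" item.
import json

breadcrumbs = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Home",
         "item": "https://www.example.com/"},
        {"@type": "ListItem", "position": 2, "name": "Widgets",
         "item": "https://www.example.com/widgets/"},
        {"@type": "ListItem", "position": 3, "name": "Blue Widget",
         "item": "https://www.example.com/widgets/blue-widget/"},
    ],
}

# This JSON is what ends up inside <script type="application/ld+json"> on the page.
print(json.dumps(breadcrumbs, indent=2))
```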
I tried without marking up the home element and it didn't work. So I did it with the home element. But one issue I saw was that if you have a multilanguage website, and your home link is, like, /EN for the English version, the bread crumbs will show, like, www site .com, brand name, then category, because it thinks that /EN is a soft category of some sort. So [INAUDIBLE]. Yeah, I've seen something similar with other cases where you have that kind of a structure within a website. Sometimes with site links, as well. So I suspect it's something that will take a bit of time to kind of get ironed out completely in our search results. Yeah, well, I used the home element just as a normal domain name, and that seems to work fine. OK. Good. Good to know, yeah. I think more and more of the structure data elements will be possible with JSON-LD, and that makes it possible to do some more, I guess, advanced markup, like you're doing there, so looking forward to seeing how that will roll out. By the way, regarding a site link search box, how long-- so if a website already has a site link search box, but with no markup that would send you to the site column domain name search, how long from when you implement the JSON-LD markup does it get picked up and send users to the search results page? That can sometimes take a couple of weeks. OK. So it's not as fast as just recrawling the homepage. We really have to kind of reprocess it specifically for that, and that-- that whole process takes a bit longer. And if the site is using a Google Custom Search thingie on-- inside its website, so would that be a problem, or should-- No. That's fine, yeah. OK. All right. With that, let's take a break here. Thank you all for all of the questions that you submitted. It's been really interesting to get so much feedback here as well, and to have so many live questions here too. I'll set up the next couple of Hangouts. I think the next one currently is for app indexing. If you do have an Android app, if you're implementing app indexing, by all means jump in there. For more webmaster, web search Hangouts, I'll set those up maybe later today so that you can start adding questions there, too. With that, thanks again, and I hope you guys have a great weekend, and maybe see you in one of the future Hangouts. Bye everyone. Welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analysts here at Google in Switzerland. And part of what I do is talk with webmasters, publishers, like all of you and try to answer any questions you might have around search-- around your websites, those kind of problems. I see we have some newish faces here in the Hangout at the moment. Is there is something that's on your mind that you'd like to start off with-- a question maybe? Hi. Hi, go for it. Can you hear me? Yes. Hi, John, thank you. I've got a question. We received quite recently-- for the last three months this message from Google Webmaster Tools, or the Search Console should I say, that we've got too many URLs, basically. We've got an e-commerce website, which lists dynamic links generated from faceting. But we tend to control all those links. It's a message we had a long time ago. We've been blocking most of those URLs through robots noindex, nofollow or through the robots text, we have been blocking them as well. And we also configured the Webmaster Tools to block those URLs. Now, we've received this message again-- these messages. 
I'm checking the example URLs from those messages. And none of those pages are indexed. Basically, they are all noindex-- noindex, nofollow. What should I do? So this message comes before we actually check those pages. So essentially when we crawl your website, we discover all of these new URLs. And we don't really know what is on those URLs yet. This is usually a sign that something within your website is generating all of these URLs. And that's making it harder for us to understand which URLs you actually should be crawling. So especially if they have a robots noindex on them, we have to crawl those URLs before we can see the noindex. I understand that. But let's say it's one which is blocked through robots.txt-- you shouldn't get to that stage. It should be blocked before. Theoretically, we should notice that and not flag those, yeah. So if you want, you can send me the link-- maybe drop it in the chat here. And I can pick it up afterwards and double-check our triggering on that. I know this is a bit of a misleading message sometimes in the sense that maybe you are doing all the right things. But it's still showing up. From our point of view, it's really something where we discover a lot of these new URLs on a website when we crawl it. And this is the kind of a situation where, oh, it looks like something completely broke on the server, or maybe there is a type of navigation within the website that generates infinite URL variations. And those are the kind of situations we want to alert the webmaster about. Maybe you're doing everything right already. And we should have recognized that better. One more thing-- I was thinking of the implementation on the site. You put a rel nofollow on the links themselves. That can help. That would help us? OK. And what is the danger of these messages with respect to Panda? These messages are purely technical. So this is purely a technical issue on your website. It has nothing to do with how our quality algorithms would view your website. It's purely about crawling. I'll try it there. Thank you. All right. More questions from those who are new to these Hangouts. What can I help you with? Nothing special? OK. We have a bunch of-- yeah? Can you hear me, John? Yes. Great. Yeah, I've been watching the Hangouts for a while. I worked for a company called Columbus Direct. And we were hit with a Penguin penalty back in 2012. The website had been doing some pretty aggressive link building tactics for the 10 years preceding that with good success. And we've spent the last three years really trying to remove links and disavow. And so the penalty-- we got the notification that the penalty was removed more than 18 months ago. But our rankings haven't recovered. So I was just wondering whether you could give us any tips on how we can move this forward. That's always hard because you almost have to take a look at the individual situation of the site there. So that's the .com version. Or do you have different versions? We have different versions. But the .com version is the main one-- it's for the UK market. And that's where the marketing effort was focused. So I think if you had unnatural links, for example, on your website and that was a problem, and maybe there was even a manual action there, then that's something that can still linger from an algorithm point of view, in that our algorithms are still kind of worried a bit about those unnatural links. So I think that's probably what you're seeing there. 
So we've engaged a number of agencies to remove links and to build up the disavow file, which has now turned into kind of a mammoth thing. And I'm not sure if we've got any natural links left, almost. And certainly we are a very large player in the UK travel insurance market, but not next to the people that are dominating the front pages in today's results. It's the aggregators, of course, and some of the bigger insurance players. And I was wondering whether we would ever make it back-- whether we should continue to pay people to remove links-- whether we should be looking at focusing on other areas, or what you would suggest. Well, I think this is always a tricky situation because you don't really know exactly what you might be doing there that would help your site in the long run or that would help in the short run. I think, in general, if you've worked hard to clean up all of these bad links-- to put them in a disavow file-- to remove the ones that you can-- then that's probably the right step there. And moving forward, I'd just continue focusing on your website to make that the absolute best. So instead of focusing too much on links now, I'd try to find a point where you can say, well, I've cleaned these up as much as we can. There might be still some back and forth, but-- Because we'd put so much work into cleaning them up, and there was manual re-inclusion because it was a manual action that we were notified about for unnatural links. I think from the business's point of view, when we had the message saying that the manual action has been removed-- but 18 months later, we are still in this kind of penalized position. So whether we've been penalized by algorithmic factors as well, and how we could kind of identify what the problem is that's still on the-- Yeah, so in a situation like this, it's certainly possible that some of our web spam algorithms are still picking up issues there, or were picking them up when they were last updated. So that's kind of a situation where I'd try to find a point where you can say, well, I've cleaned up all of these web spam issues-- these problematic links-- as much as possible. And just from here on, I just want to focus on making the website better to really kind of build up the natural strength that's otherwise existing within the website. So instead of focusing on things that are so far behind you, that you've already definitely cleaned up, I'd really just focus on the future. OK, thank you. I think there was one question in the chat. "On a .com domain with English and German content, will the German content benefit from English links and vice versa?" Yes, of course, if this is one website and you have different kinds of content and some kind of links, then that's something where, generally speaking, the website itself will grow in value over time as you gain links-- as you gain popularity-- those kinds of things. And the main thing I'd watch out for there is really making sure that you put the German and English content on separate URLs, so that you don't have one URL with both German and English maybe side by side, but really separate URLs for each language. John, a follow-up on that. What if you have a ccTLD-- so a Romanian website-- and you get links from international English-based websites. Would those have a lesser effect than usual, or would they be ignored? For example, some people wrote about something I did, but in English, and they say, look, the Romanian version is over there. But it's in English. It's from an English website. I don't see a problem with that. 
I think that's perfectly fine. That's not something you need to artificially channel to an English version of a page or something. I think that's perfectly fine. That kind of situation happens all the time. You also have that between different versions of your own website. So if you have an English version of the content and a Romanian version, you'll probably have a link back and forth saying this is the English version-- this is the Romanian version. And that's perfectly fine. Right, but in that situation you know, especially if you have hreflang that this is the same entity, just a different version of it rather than an external type. I'm curious if it's a less effect if it's coming from a different language, for example. Because the users-- that link might be that, well, I don't know that language. So it's not as important. No, think we treat that just the same. The main difference I guess there is that the anchor text might be different. If we see a lot of anchor text going to one page with the specific text, we might assume that page is about the text. And if that text is the wrong language, then that might make it hard for us to figure out, OK, which of these pages should we rank for this English query. Should we rank this Romanian page or should we rank an English page we might have that confirms that on the page. That makes sense. Can I touch base on Dan's question? Sure. OK. I know that part of his problem might be that the links that he removed are what was helping him rank. So that might be part of the reason why he's not ranking or getting back to where he was. But my main question is as you said that a lot of times after you revoke a manual review, the algorithm will still be itchy on that URL. How long does something like that last? Sorry, I didn't understand the last part. Oh, I was just saying you also said that at times after a manual review is removed, the algorithm will still be touchy on the actual domain because it's like, hey, this was a manual, penalized site. And how long will something like that take for it to be like, OK, it's cool? It really depends on what all happened there. So I guess the main difference when it comes to manual actions and algorithmic changes is that for manual actions, we really focus on the links that you have visible in webmaster tools or in the search console. And if you clean those up, then from a manual action point of view, we'll probably say that's good. You you've done a good job cleaning this up. And we will resolve that. And from an algorithmic point of view, we take a look at everything that we found, which might include some things that aren't directly visible in search console. So that's something where if you have a few small things that just aren't shown because they don't match that threshold that we use for search console, then that's fine. But if there is this giant mass of links out there that are way below this threshold in Search Console, then those can also add up. And that can be something that algorithms respond to because they see, well, it's not that there are individual links that are really visible that we should show in Search Console. But there's this giant mass of these small things that are hidden around the web with paid links or whatever they are. So that's kind of the difference there and how the algorithms see things compared to how manual actions-- how the web spam team would see that directly. 
But it's also not the case that our algorithms have any kind of a grudge where they would say, well, this had a manual action. They cleaned up all the problems completely. But because they had a manual action, we will kind of demote it for a while anyway. That's not the case. It's really that they're both looking at the current situation. And algorithms sometimes have a bigger set of data to look at. But they also might not be running as frequently as maybe a manual web spam review might go through them. So just because a manual review has been revoked does not mean that it could be completely good to go? Yeah. So if the manual review is kind of OK, then that's, at least from a manual web spam point of view, that means we're happy there. That doesn't mean that all of our algorithms are going to say this is a fantastic website. John, what do you mean about this threshold that you mentioned? Can you expand a bit on that? So we try to show the links in Search Console that we think are relevant to your site. It's obviously a sample of the links that we have. It's not the full set of links that we have. So that's what I meant there. It's not that there is a magic page rank threshold or something like that. But we try to show what's relevant to the webmasters-- what they might be interested in. And that's a [INAUDIBLE] threshold or something like that. But we try to show what's relevant to the website-- what they might be interested in. And you see that, for example, when you have one site that has site-wide links to your website. So if you have two websites, and one of them links from every page to your other websites, then you'll often see that we know about [INAUDIBLE]. But we show [INAUDIBLE]. So you see that, for example, when you have [INAUDIBLE]. I am going to mute you guys. Someone has an echo or is watching the YouTube video at the same time. So that's kind of what I was looking at there. But is it the case that you might miss or might not display any link from a single domain? I noticed that you usually show one, two, three, five links, even if it's site-wide and in reality there are 1,000. But is it the case that you might miss entire domains? So you have five links from a website, but you don't show any of them? That's possible if we don't think that they're really relevant. So it's not something that I'd say that the average webmaster would really notice, because maybe this is like a link on some site that nobody ever visits-- some spammy site that exists out there. But it got indexed. But it's not really something that we would say, well, people need to know about this. Right, I actually [INAUDIBLE] was doing the disavow file for my penalized client. And I noticed he gave me a list of all of the directories he submitted his site into. I noticed that a lot of them weren't in the Search Console part. So I added them to the disavow file anyway, but I just thought it was-- Yeah, so usually what I do there in a case like that is try to find the pattern. So if you know that they submitted that to directories, and they used the same title or the same description, then search for that separately. So just do like normal Google searches and try to find those directories explicitly-- maybe like directory and then the title that they submitted-- something like that-- to find those as well. Yeah, that makes sense. John, following up on that, you said that the links that you find in there-- Google finds them relevant or there is some reason for them to be in there. 
But surely they can be good or bad. As Mihai was saying, if you're using Search Console to find bad links. But you're saying there are some that you should probably-- if they are not in there, don't worry about them. Are you advocating using external tools then to find bad links? Surely relevant goes both ways. Relevant is good or bad. Sure, it can make sense in some situations. I think for the most part, when webmasters are looking at things like a disavow file, going through cleaning up old issues, then on the one hand, they didn't know what they were doing or they should've known what they were doing and can guide you towards that content like with the directories in this case. And on the other hand, usually you'll find the patterns in the links in Search Console. You'll see there's a bunch of directory sites listed here. So maybe there are more of that I could clean up as well. And, finally, if they're really not listed there, and it's something where you suspect there might be some other ones out there, then it might also be that they are so small that you don't really need to worry about them. But adding them to disavow file is basically just more work than you gain from actually cleaning that out. Right. But if we find links within Webmaster Tools, they are relevant for something, whether that's good or bad? It can go one way or the other, surely, because otherwise you wouldn't need Webmaster Tools to clean up links because they are all positive. We try not to do a big quality analysis before we show those links in Webmaster Tools. But sometimes we just have to categorize them like that-- show some of these things in Webmaster Tools because we think they are relevant to the webmaster. And some of them we think, well, this is a sample that we've already shown you before like, for example, the site-wide links. And that's something that you don't really, really need to happen. I think especially the site-wide links is one of the reasons, for example, we recommend using a domain directive in the disavow file. So instead of focusing on those individual URLs that are listed there, try to find the pattern that matches all of that. And disavow that on a broader basis. I know it's not always easy. But you guys are the experts on these things. So it's good that you know a little bit about the background there, I guess, and sometimes you do have to make judgment calls and think about, well, is it worth digging even deeper into this hole of all of these tiny links that might still be out there. Or is this basically just time I should better be spending on making my site even better and saying, well, at some point, I have to make a cut and say I have to leave the rest of these small problems behind and focus on really creating more value on my site in general. Surely you benefit, Google benefits, and every user benefits from you showing the ones that you want us to work on. There's no point in giving us a set of links that have no bearing whether it's good or bad. Otherwise, they are not helping to clean up the index or report bad links or to help you use disavow files for future research. You're just getting junk back? We try to show those links actually independently of disavowing and bad links/good links situation. So we try to show them because these are potential traffic sources to your site. And we had the links featured long before we had the disavow problems there or the disavow files. 
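On the domain directive mentioned just above, here's a minimal, hypothetical sketch of what a disavow file built that way might look like. The domains are made up, and the comment line is only there for whoever maintains the file-- as noted later in this Hangout, Google doesn't read comments in the file:

```python
# Hypothetical sketch: write a disavow file that leans on domain: entries
# instead of listing every individual URL.
entries = [
    "# Directory submissions from the old link building campaign",
    "domain:spammy-directory.example",
    "domain:paid-links.example",
    "https://some-other-site.example/one-specific-page.html",
]

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(entries) + "\n")
```

The resulting disavow.txt is just plain text that gets uploaded through the disavow links tool; the script is only there to show the line format.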
So it's not something that these links are provided in Search Console just so that you're completing them out. It's really for the site owner in general that these are potential traffic sources to your site. And some of these might even be nofollowed. Some of them might be disavowed already. So it's more than just a web spam a check for your site to look at that. By the way, if I could give a couple of examples that I've noticed that you show and don't show. For, example, I noticed you don't show my grandpa was building some Blogspot websites brand-new and was putting some articles with links for them. I noticed you don't show that which is pretty normal because those weren't existent a day ago. So it's normal that they wouldn't even bring traffic back. But you do show some popular directories, even if it's not a good link-- a natural link. You do show it because maybe it might bring some traffic back even though it should be visible. Is that a good assessment? Both of those cases probably fit. Yeah. It's really not the case that we provide this link list as a means of cleaning up web spam issues. But rather, we think that a lot of sites might want to see where they are getting linked from and where they might getting traffic from. So that's essentially the background information there. But let's go through some of the questions that were submitted because people have been voting on those as well. And I don't want to leave that out. We have this question from Barry. "In the German Hangout, I am told you said the Panda refresh is coming this week or next. What did you mean? Did you mean it still can even later than next week?" So a few weeks ago, we said that this refresh would be happening in a few weeks. And so it's coming up. I doubt it's going to happen this week because it's a holiday. And it's Friday. And people are busy doing other things. But I imagine in the next couple of weeks, this is something that might be happening. So we have to set expectations. We try not to provide an exact date when these things rollout because things can always change in the meantime. But I expect this to happen fairly soon. It will be a popular subject. Are you going to do maybe a special Hangout quality content and best practices-- things like that, maybe? I'm going on vacation next week so probably not. But maybe we can do something when I am back to look at these issues. If people have questions around that because I think especially the blog posts from Amit Singhal, which is now four years back-- "23 Questions You Can Ask Yourself about the Quality of Your Content." That's still very relevant. And that's still very important to look at. "Can Google crawl a hosted video and understand that the same video is hosted in more servers and published in more sites?" We do try to understand when something is a duplicate and treat it appropriately. So we do that with textual content-- web pages, for example. We try to recognize if something is a duplicate and filter those out when we show them in search. We do that with images where we can. And we do try to do that with video as well. So if you go and host your video on a number of different services, that doesn't necessarily mean that your video is going to show up five times instead of once in the search results. "If you have a site like youtube.com and under every video I put a button for the user who wants to download the videos he likes more, will this thing help me in Google Search. Can Google track this kind of user action?" 
If you make a site that works well for your users, then that's something we'll try to pick up indirectly at least. And people will try to recommend it more. But it's not the case that we are going to see what users do within your website because we don't really see that at all. So from that point of view, what exactly you do on your website is really up to you. But if we see that users really love your website and they recommended it to others, then that's something we can pick up. The pop up that's opening in another tab can ruin site positions and search results. So this is an almost obnoxious behavior if you visit a site and it just opens up pop ups. It's not something that I am aware of us picking up as a quality signal or using for ranking directly. But, of course, if you are obnoxious towards your users, then that's something that they might reflect in how they recommend your site. So that's again something more indirect that we might notice that people don't really like your site. They don't like to recommend it compared to other people's sites that they do like to recommend. John, can I ask a question on that? Sure. When you track-- actually, it's probably more of an analytics question where Search Console does the same now. When you track time on page or how long a searcher is on the landing page, do you track active window versus-- because let's say there's lots of sites. Let's say it's for cars or holidays or real estate where you're looking down on the map and you open, or I certainly do, open 10 tabs. I want to look at that one, that one, that one, that one. Then you work your way through the tabs because that's how most people browse these days. What about that 10th tab I've had open for five minutes without actually looking at it? So how long have I actually been viewing that page according to Google? The five minutes? Or the 10 seconds I look at it and close it? We don't use analytics data directly in search. So that's at least one thing that we wouldn't do there. In general, when we look at this kind of behavior, it's something that we look at on an aggregate level where we'll look at it for maybe algorithm changes where we see, well, people are generally clicking on the first result when we make this algorithm change. And people are generally click on the second result with this other kind of algorithm change. So maybe we're doing something wrong there. So on that level, we might look at that. But it's not something that we'd look at on a per page level. And I imagine it's also when you look at the broader public, they don't exhibit that behavior that you're doing there like opening everything up in tabs. It's only for a few sites. If it's the certain types of site whether you're looking at properties to rent or looking at cars or if you're trying to book a vacation and you open 10 hotels, I'm sure a lot of people do that. But it only effects certain industries you would imagine. We look at this data more on an aggregate level when we look at overall the algorithm changes across all types of searches. So it's not something that we say, well, this would affect your site. People are opening it up in a tab and looking at it later or comparing it to other tabs that they have open. So that's not something I don't think would ever really affect something there. "Does an exact match URL still boost your rankings compared to a non-exact match URL when looking at a brand-new site?" As far as I know, no. 
So on the one hand, with exact match domains, so in general, that's a practice where you put the keywords for your site into the domain name and hope that your domain name ranks better because it looks like it matches those key words. But as far as I know, that has no effect at all in search at the moment. It used to be maybe looking back what was it? Five years or longer that did have some effect. But at least at the moment, that has no effect at all. Is it still more important for rankings to place your keywords in the first 100 words of your content? Or does it not matter anymore? I suspect that never really mattered. So we try to understand what these pages are about. And we look at the whole page if we can. There's obviously a limit to the size of the page that we can download. But I think that's around 10 megabytes. So if you have your content within those 10 megabytes, then we'll be able to recognize that and show that in search. "We're using parameters for campaign tracking like utm_source? Is it duplicate content if parameters on each site are unique, but the content is the same?" So technically that would be duplicate content in the sense that you have different URLs with these different parameters leading to the same content. In practice, that's not a problem though. So on the one hand, we recognize that this is exactly the same content where we can filter it out even before we index those pages. On the other hand, if we do manage to index those pages separately, for example, if there's a date or timer on the page or the sidebar that's dynamic, then we'll try to filter those out when we show them to the users. So that's not something I'd really worry about there. The other aspect is if we find those parameters just within your normal analytics code, for example. Then that's something we'll try to filter out on our side anyway. We'll try to recognize that. If we find those parameters in normal links within your website or external links to those pages, then we might try to crawl and index those pages like that. But you can block that by using the URL parameter handling tool to let us know that these parameters are actually totally irrelevant. And chances are if you use analytics anyway with these parameters already that when you go to search console to the URL parameter handling tool, it'll already say we recognize these parameters and these probably aren't that relevant for your site. "Does using keywords in the domain-- is that still bad for SEO? Will I get an EMD penalty for using the keyword in the domain name?" As I mentioned before, that's not really something I'd worry about here. With regards to the EMD penalty that was mentioned there, this is something that we mentioned a while back where we are seeing a lot of sites use exact match domains with really low quality content. And really the combination of having really low-quality content on these exact match domains is what the problem is for us. It's not so much that your keywords happen to be your URL because a lot of sites have keywords in the URL-- a lot of brands have their brand name in the URL which makes sense. And just because that matches what people are searching for doesn't mean that it's automatically bad. So can I touch base on that? Sure. So basically what you're saying is if somebody owns like, example, tools.com. If you have crap content, then, yeah, it might be looked at and maybe pushed to the back a little bit. 
But if you own tools.com and you have good information about tools, quality content, and user interaction, then having the tools.com will actually help you rank a bit better because you have the EMD? Is that what you're saying? No, no. It's not that you would rank better because of that exact mention of a name. But it wouldn't cause any problems for your site. OK. So just because you have a great site and the keywords for your site happen to be in the domain name doesn't mean that we're going to penalize your site. So EMD has nothing to do with ranking at all anyway? No. OK. And we've seen this a lot in situations where someone will go off and buy green tools, brown tools, yellow tools, garden tools, cartools.com-- all these different domain variations and essentially put the same content up. And that's something that we were trying to target with that specific update. So if these are really doorway sites essentially with low-quality content, then that's something that we might take action on. "Anchor text with internal links that are not part of the navigation bar-- does the text matter if the same text is used too often? Is it penalized?" No, that's absolutely no problem. You can link within your site however you want. The bigger issue that we sometimes see is that people try to use this as a way to create a sitemap page within their content where they'll have their main content on top. And on the bottom, they have this big chunk of text with keyword-rich anchor text links to individual parts of the site. And it's not so much that these links would be unnatural. It's just that it looks like keyword stuffing. So we look at that page. And we find this big chunk of text on the bottom with just keyword, keyword, keyword. And that looks like keyword stuffing and our algorithms might look at that and say, well, I don't know if I can trust this page. So it's not that those links are not natural. But it's just that you've stuffed it with so much text that we don't really know what's actually relevant on these pages. "If you 301 redirect HTTP to HTTPS, will there be a slight drop in ranking similar to what happens when you move from one site to another?" Not really. So this is not something that you would see in practice or where you would see any kind of a drop. So we try to treat HTTP and HTTPS sites as similar as possible. And if you move from one version to another, then we'll try to just forward all the signals that we have there. "What's the quickest way to get Google to read a disavow? Do you have any tips on structure or content of this disavow? Should there be more or less detail?" So we read the disavow file right when you submit it. There's nothing special that you need to do there. What I would recommend doing though is using the domain directive as much as possible. So instead of listing individual URLs, try to list the whole domain. That saves you a bit of time because you don't have to chase all of those individual URLs. And it also makes it easier for us to process it because we don't have to match all these individual URLs. So that's maybe a tip to watch out for. Past that, it's really up to you. You can leave comments in a disavow file if they help you. But we don't read them at all. So these are the processed by our systems automatically. If there's a comment in there that you want to give Google, then the disavow file is probably a bad place to leave that. "Googlebot found extremely high number of URLs. I think we talked about this briefly before. 
When using a CDN for speed, the number of images show up indexed even though we have set up the rel canonical." So I guess there are two aspects with regards to images that are kind of relevant that you'd want to watch out for. On the one hand, you need to make sure that your web pages point to the images directly as much as possible. On the other hand, we tend not to recrawl images as quickly as web pages because usually they don't change and usually they're pretty big. So what happens is if you change the URLs of your images, then it takes a lot longer for us to actually find them. So as much as possible, really try to keep the URLs of your images as the same as long as possible. That means if you're using a CDN, try to maybe use the same URLs as you had before if that's possible. If you're moving from one host name to another, then just keep in mind that it takes a lot longer for us to process that when it comes to images. So definitely make sure you don't have any kind of session parameters in the image URLs. Really try to make sure that the images URLs that you have specified on the web pages match exactly the ones that you use for hosting the content that you won't want to have indexed. We also don't use rel canonical on images directly. So you really need to either redirect those image URLs if you're moving from one site to another or at least make sure that the web pages really point at the actual image URLs. Another aspect we sometimes see is that people will set up a CDM that's kind of like a round robin set up where you have different host names where the images are sharded across different servers. And if you do that just make sure that you're always playing at the same hosting for each image so that it doesn't happen that we crawl the web page. And we see one link to version A. The next time we crawl that page, there's a link to version B. The next time we crawl it to version C. But instead, really make sure that the URL you use for the image is as static as possible and that it stays the same. Well, what we did is we were trying to get every page to load under two seconds. And the only way that we accomplished that with a 2 megabyte page was to use a max CDN service. So we set that up almost a year ago now. And the images are still showing in the Google Webmaster Tools area. And it's about a year that that's been set up. And we haven't made any changes whatsoever. But the CDN itself-- we made it where it's cdn.oursite.com. And then, of course, the string. And that's what we've done. Is it crawlable? Is there a robots.txt in place maybe? Yeah, they actually said for us to use the noindex, nofollow that they offer because you don't want to run into duplicate content issues. So should we actually take that noindex, nofollow robots off of there. Because I can see what you're saying. It's not getting it because the robots on that CDN is saying don't index this. So maybe try taking the index off? We just didn't want to get in trouble for duplicate content. You know what I mean? I wouldn't worry about duplicate content there? Because if we recognize it's exactly the same as we've already seen, that's a technical problem for us. We just try to fold those versions together and treat them as one. But if we don't have any version all, then-- Yeah, this just happened. So I'd definitely look at that. And maybe what I'd also double check is in Search Console-- in Fetch as Google-- you can look at the rendered view of a page. 
Really make sure that in the rendered view, the images also show up so that it's really kind of a check to see that Googlebot can actually access those images. And if there is no index in the way, then actually we should be able to index those too. John, regarding fetch and render, there was actually a product forum thread about somebody asking that if he uses j-query to generate a rel canonical tag I think that was. But it doesn't show up in the code when he does the fetch and render process. And the rendered image wouldn't show anything else in the head portion of the HTML. Does that mean that Google doesn't trust this? It's tricky. If you're doing something like that with JavaScript, then chances are we'll pick it up. But it's not guaranteed. So if, for example, for any reason we can't process the JavaScript file, then we'd crawl the page without the rel canonical. The same thing happens if you use JavaScript to add noindex, for example. So it's a situation where I imagine most of the time we'll get it right because we'll be able to render these pages with JavaScript and see that JavaScript adds a rel canonical or noindex to the head and we can use that. But if for whatever reason we can't process the JavaScript file, then we have that version without that extra metadata. And if that's a problem, then you might want to prepare for that and just have some kind of failsafe within the HTML. And one more thing regarding that. So the code that is say in the fetch and render result before you render the JavaScript and the after? Because that person would have never seen that rel canonical tag in the code. Yeah, so with the fetch and render, we show the screen shot of the rendered version. And we show the HTML source that we actually downloaded. So you wouldn't see if there's a rel canonical that we would render with JavaScript. I guess one thing you could do is set up a test page that inserts rel canonical and then uses JavaScript to try to read that out and display it in the big text so that you could see it in a screen shot. But usually we have no problem finding that kind of rel canonical when it's generated with JavaScript. That was my suggestion. All right. "We're currently receiving a large amount of referral traffic from spammy domains. Will this analytics data affect our rankings? Would do you suggest blocking these domains in HT access such as filtering and analytics?" I can't speak for the analytics team directly. I know they're aware of this problem. And they're working on resolving that. One of the things to think about there is a lot of the spammy traffic that I have seen at least with my sites is from sites that haven't actually visited my website. So it'll look like referral traffic and analytics. But it's not actually from someone who visited the site. So by using htaccess to block them on my server, that wouldn't necessarily block that from appearing in analytics. So you might want to double check that before you spend a lot of time working on an htaccess file. There are ways that you can filter this data in analytics though. So that's something you might want to look into. I've seen a lot of blog posts that have the instructions on how to set that up. "How much does silo structure affect rankings for e-commerce sites?" I don't really understand that question. I guess that's about the URL structure within commerce sites where you have a complicated structure sometimes. From our point of view, how you set up the URL is essentially up to you. 
It's not something that we'd say you need to do it like this or like that. Some people use path. Some people use dashes in between different parts of the URL. Some people just use the IDs that come directly from the database. And all of that works for us. What's important for us is that we can actually crawl your website and go from one URL to the other to find all of that content. And ideally, that we understand some kind of a structure when we crawl the website so that we see this is the main page. This is a category page. There are lots of detail pages here. Maybe some of these detail pages are related to each other and they have cross links. All of that helps. What doesn't work for us so well is if you just have one homepage. And it has a search box in it. And none of your content is actually linked from those pages. So if we have to make up random search keywords to try to search through your website to find those pages, that's going to be very tricky. But if you have a clear URL structure that we can click through and follow through to your pages, that's perfectly fine. How you structure the URLs themselves-- if you use different path elements or all one word with dashes, that's really up to you. John, I think the question mainly refers about segmenting your site into separate topics and the topics themselves don't really link from one another. So that's siloing. Like you have a TVs section and gardening products. And those two don't really link to one another thereby separating them. That's fine. That's fine too. If we can recognize those individual parts of your site, that's perfectly fine. A lot of people have this structure naturally. And they use maybe different CMS systems on the same domain. Maybe they have an e-commerce site and a blog. And they're kind of different CMS systems. So they don't cross link by default. And that works too. As long as we can really crawl from any place within the website to discover the rest of the website, then that should work for us. "If a person takes a hot link from my video or photo that I host on my website and publishes it in his website, is that counted by Google as a backlink for me?" I don't think we count that as links directly. But it does help us to understand the images better and to figure out how we should be showing these in image search. "As you confirmed in the German Hangout that Panda update is coming. Is this just for German or for all languages?" These updates try to be as general as possible so that they be for all languages. "How to implement site link search with a Smarty template? I tried using literal. But it's not working." You probably want to talk with someone who has experience working with this kind of templating language. So that's possibly not something that you'd find help for in the Google Webmaster Help Forums because you really need to have someone who knows that templating language and can give you some tips there. Actually, I used Smarty to implement that. And I didn't have any issues. OK, so talk to Mihai. Ha, you just volunteered, sorry. "In regards to 404 pages, how long does Google keep these 404 URLs in Google's index before they get removed, even if it's removed, can we use 301 to get those pages back to the index?" We do try to remove them as quickly as possible. With 404 pages, we might check the URLs maybe once or twice or three times just to make sure that it's really gone before we actually drop it completely from the index. 
But there's no fixed time frame where we would say after one week it'll happen because sometimes after one week, we wouldn't have even crawled that URL once. So it really depends a lot on the site and how we crawled that. "Can a drop down navigational menu affect my site's crawling, indexing, and ranking?" Sure, theoretically. So with a drop down navigation menu just like any other kind of navigation, it helps us to understand the site structure and to kind of follow through and find links to the individual parts of the site. And if you don't have links to the rest of your site-- to the rest of the site structure-- that makes it really hard for us to actually pick up and figure out what we should be showing here. So if you have a navigation menu that leads to all the parts of your site, that's perfectly fine. If that's a drop down navigational menu or a sidebar or whatever else, that doesn't really matter so much as long as we can crawl them. Sometimes what's tricky is that you'll use a JavaScript widget for this navigation. And if the JavaScript is blocked by robots.txt, then we won't be able to see that menu at all. Then we'll just see that you're crawling some JavaScript file. We don't know what's in that JavaScript file. So we can't use that to crawl the rest of your site. But if this is a navigation menu that we can pick up for crawling, then we'll try to use that to crawl the rest of the site. "For new websites that are just beginning the SEO process, is it better to launch the website even when it's not 100% optimized and finished or really make a complete site that's full and optimized with metatags and alt titles before launching?" I think this is kind of up to you. It's not the case that we would count it against the website if they don't have metatags or clean titles on their pages. We try to look at that every time we crawl those pages and take that into account. And if you want to launch now and you know that your site isn't 100% perfect. But you think people are just waiting for it and eager to look at your site and to recommend it to others-- maybe launching a slightly incomplete site makes sense. On the other hand, if you want to make sure that everything is 100% perfect before you launch, then maybe waiting until you're satisfied with this stage makes sense. So it's really up to you. Can I ask question on that? Sure. This is something that is like-- everybody talks about this all the time-- about new site versus old site. Does that matter? If I have a site that's 20 years old with the same length, same content, as a site that's a month old-- same length, same content-- does it matter if my site is 20 years old versus a site that's a month old? Because it's asked everywhere. Yeah, it's a very theoretical question because if your site is 20 years old, then chances are it has a lot more history than something that's maybe a month old. So saying that they are exactly the same with regards to links, for example, that's usually not the case. But it's definitely not the case that there's any kind of old site bonus where we would say, well, this domain has been up for 10 years. Therefore it should receive a ranking boost. That's not the case. So all things equal, the age doesn't matter. It doesn't really matter, no. And I guess one aspect might come into play is if it's a really new site. And we don't really have any good signals for that site yet. So it takes a while to understand that this is a site that has good signals where we see people are recommending it. 
It looks like something we should show in search. But if you're talking about something that's maybe a month old. And that gives us time to look at how that is shown-- how we should show that in search. So that's kind of comparing a site that's one-day-old to something that's 20 years old. It's really hard. But once you're past like a month or something, then we understand how the site should be shown in search. Let's see. "Can I include canonical URLs in site maps for SEO? For example, url.htm? Here's a duplicate of example two.com. I use this tag in the sitemap." I recommend trying to do that on the pages themselves or in the HTTP header instead of doing it in the sitemap. I don't think it's directly possible in the sitemap file, but I might be wrong there. I think with mobiles pages, for example, we do have the option of having the link rel alternate in the sitemap file. But I'd really try to keep that in the HTML as much as possible-- the canonical. "In Search Console, I recently received a message, add app property or URL error reports. But I have already added the app index app property. And reports come there. Is this message just a notification?" Yes this is essentially notification. So to back up a little bit, we recently added the ability to add Android apps directly to search console. So that if your app is indexed for search using the app indexing API, then we can show you information directly there if you have that app verified in Search Console. In the past, if we recognized that your app belongs to a website, then we would have shown your app information on the website in Search Console or rather within the messages that we sent to the website in Search Console. So it's kind of, I think, maybe a month or two ago we added that ability. So if you have an app that you're using app indexing for, make sure you have it verified separately in search console so that you actually get this information about clicks and impressions-- any crawl errors that we might have for the app-- all of that. "Is app indexing for app-only content already reflected in the search results? At Google I/O, Scott Huffman said that some of this would finish in a few weeks." I don't believe it's live yet in search. So app-only indexing means you have an app that has content. And we try to show that in search results so that if someone searching for your content, we can recommend to them if they're on a smartphone that kind of uses-- allows for this app-- then we can recommend that to those users directly in search. But I don't think that's live just yet. I know they're working on some of the issues there to make that easier. It's always a bit trickier if you have an app with content in it that doesn't have an associated website because then we really have to crawl through that app to actually get all of that content. And that's definitely not as easy as crawling HTML pages. All right, we don't really have time. But I have a bit more time. So if any of you have any questions, if you want to hang around a bit, let's chat. John, this is actually regarding one of the questions left. And I've noticed this in Romania as well. There are some sites that hide the content behind a pay wall, but do allow Google to actually crawl the content that would be otherwise accessible only if you pay. And so that content in Google is crawled. And when you search in Google, you see the result. And you see the meta description containing those keywords that you searched. 
But when you actually access the website, you don't see any of that. They ask you to get a subscription. And that's not one of the most friendly user experiences. Yeah, we classify that as cloaking. So you're showing users something completely different than what you're showing Google. And that might be something that the website team would take action on if you submit a web spam report. It's always a kind of a tricky situation there. What we usually recommend for sites that do want to have a pay wall or sign up flow is to look at the first click free setup where users going from search directly have the first couple of clicks free to actually see the content. And then they are confronted with the pay wall or the login page. So I could submit a web spam report or a feedback thingy? Sure. Yeah. I have two questions. OK. One is we-- as you know, I'm in the celebrity niche stuff. But we had another blog that we were doing about celebrities-- what it's going on in the news and that kind of stuff. And then we have another site that we do but found that we didn't have enough time to continue doing both. So both celebrity related-- so it's OK to 301 redirect the celebrity site that we were doing before to our main one because we want to focus on the main one instead of trying to separate both and put up crappy content versus one site with good content? Yeah. That's OK? That makes sense. If you can do it in a way that the new content is equivalent to the old content, that's perfectly fine. If you can do it in a way that-- It's closed. If you can find something like a page-by-page matching and do the redirects like that, that would be totally awesome. And that would be absolutely no problem at all. If this is the case that you're folding one side into the other and it's essentially the same target audience, then even just doing a site-wide redirect like that to maybe a home page is also an option. That's what we did. We did the htaccess. OK, and my other question is that you mentioned that site-wide links can be bad. What about the same scenario-- another person like TMZ is linking to us in their sidebar. But that gives us a site-wide link. But we don't really want to disavow TMZ because they are a strong company. So are you talking about the site links that can hurt you as like I'm a celebrities blog. And this is a blog about gnomes. No, I site-wide links aren't necessarily bad. And in a case like that, that's a perfectly fine situation where you say, well, this website is linking to my website from all its pages. That's a fantastic endorsement, I think. That's not something you need to blog. It's just that we wouldn't show all of those individual links in Search Console separately. So if TMZ is linking to your website, you'll probably see a handful of links from TMZ in Search Console, but you won't see all of the individual URLs from TMZ that are linking to your pages. So it's not that they're bad. It's just that we don't show all of them because we think the sample gives you enough information already. OK. One last question from anyone? John, I'll ask. I don't know if you'll answer if there's ever been any update change in our original issue from two years ago? I'd have to take a look. I can wait. I thought we looked last time, right? No you looked at the new site because we'd moved completely. But we still have 10-year-old domain that had that original mystery underlying problem. That's the one without the E, right? Yeah. 
And if it will ever lift because we don't want to close it because it's got 10 years of history there. If it comes back, we can then 301 to the new one and we should theoretically get that boost back. At least for the moment, I'd continue working on the new site. All right, then there's nothing we can give us in terms of-- because it was a mystery problem. So I don't know if that's something that you needed time to work out and whether it's been enough time. It was two years. I'd have to check with the team what's happening there. At least for the moment, it looks like it still is affected by this. Right. And this is-- affected by what? Affected by the algorithms that affect that. All right, if you wouldn't mind just asking again. Yeah, I'll check. I'll drop you a Google+ reminder as if you need one. Right. Mihai? Yeah, this is actually for one of the other product forums that I couldn't find an answer to. This is a URL-specific issue. I've noticed that if you use a site query on that URL, it shows that the URL is indexed. But if you're searching for in the title-- the title of the page or all in title, it doesn't show any results. And the person was complaining that whenever searching for that specific title, his page isn't showing the results. It's actually showing another page of his website that links to that page. And that's not very user-friendly for the searcher. It's a really interesting issue because it looks like it's indexed. But it's not indexed. It looks like it's indexed normally, actually. So why wouldn't it show up in title or all in title? I don't know. I haven't used those queries in a really long time. So I don't really know what we're looking at there. But if you look at the cache page, you can see that it's actually there. So from that point of view, it's not something where I'd say it's not indexed were it's actually picking that up. But I'd have to check watch what the title shows there. Even if you search for the quote title quote, it shows up a different page of his website that links to that page instead of actually showing the page itself. OK. I don't know. I'd have to take a look or maybe talk with the team about that. It's actually a mystery. If it shows for the site query, should it also show for the E URL query? Because it doesn't. They are slightly different. And the way they are processed internally is very different. So it can happen that it shows up in one and not in the other. What I usually do is either an info query or a cache query to see which URL is actually indexed because sometimes what will happen is we have www, non www. And we show it with the site query. But we don't show it with the cache query. According to the cache query, you'll see the different URL there. So that's something I sometimes use to double check is this specific URL actually indexed? Well, I left you on Google+ the forum thread. OK, I'll take a look. That's a weird mystery in that case. All right, so we've come to the end here. I'll be setting up the next Hangout. But next time I think I'm on vacation. So it'll probably be in three weeks-- something like that. That's sad. Yeah, things will go on. Don't worry. All right, so thank you all for joining. Thanks for submitting so many questions. And hopefully we'll see each other again in one of the future Hangouts. Have a good vacation. Have a nice vacation, John. Thanks, John. Bye everyone. 
okay well have you on today's Google Webmaster Central a hard thing I'll send my name is John Mueller I'm pathogens almost at Google and search and part of what I do is harbor master is in publishers like I love you Island it looks like we have some new faces fantastic a any of you who have been been coming to this has really have any questions feel free to go ahead up to give you a chance to as the first questions guy hi hi John can you hear me yes hi I'm go ahead prolonged technical I'll all and its to transit was that it was not just about us I wondered if you could if there's any way that you can kind of have a quick question see if there's any way that it is the coverage because it's really hard to tell and missed the first part of your question ok sorry I so basically be done love come to disavow work on the website I've been working on so well now and we haven't come to see me and not have an increase in confirmed that what kind of recovering I was wondering if there's any chance that you could have a look and give me an idea is so-and-so it's a web sites where you see my home naturally as they don't have a manual action is %uh yeah we didn't get money election arm I A can take a quick look but chances are I can't really East see anything specific that watcher what's actually happening so what ok what might be happening is that has some of our website algorithms are picking up this user and that they have been updated and ok Mozart something where even hear you focus on cleaning up a lot of the things those reforms need and work on our society a corrupted so I got a little something where you might be seen a delay from there you should still be seen some of the changes from like just generally working website let's use the temperature on some improvements but you won't feel free to drop to your own child and i cant take a look afterwards or we have any time in between and see he also pick up yet now I will I'll message on the chat it was just gonna be interesting cuz I think back in November those love kinda Holmgren movement in terms as I'm stuffing activated them a bust I miss soco a huge increase your not period and then it the best again so it was quite difficult for us work out whether you know we are heading in the right direction site but yeah I'll jump on chat okay grand thank you sure but went on isn't it kinda your I'll /url manual actions are by usually more your keys in up well where now and bow there are what your you we both if somebody does a good job the bar on but on I who wouldn't say you can simplify it like a there there's lotsa room in between where II thing to two largest and having something that's automated makes it a lot easier lots and I'll better to handle because you can just fix the problems and I'll just automatically and remove things up in the air right direction and for sale are different algorithms Zach react in different ways so it's something where fixing a problem that s kinda I'll in pink are gone usually results in like a broader improvement across a website whereas with 90 actions it's really like its and locked out for movement so that's something where even there sometimes they're they're they're different scales where that website you might say well this is lighter really big well we really have to take action and make sure that these are exertion all and sometimes I just really small things but the what sentence is what we have to take a small targeted national action to just clean this up because we know it with massive might not really 
be responsible for me something that can do about that admits on third party sites you don't access to the don't understand these problems and those are the situations where we try to take targeted action in a way to just under reduced and effects complete right well I think but I'll or ahora a quien one being penalized by Penguin that there are no up and one being I'm about what action world all up the climb yes they actually right in the the whole directly hammer I'm impact that I want them II ministry what the member action in Black two weeks they were National Weather yet one of the highlights of her two years but affect meanwhile I'm a so its its it seems to be a lot are get 0 well automatic at least for the moment your you know mountain improve that go a who more pasta yeah I think it's something we we always work to try to find the right balance signs not not revealed it to get it cracked now I you don't have any in you like yea for the next a and when I nothing to greenhouse sorry min alright and maybe some other owning faces do you guys have any questions they didn't we can help with before we get started I'm party warrior sure I'll get here is eighty yes both okay okay I actually on run how it always tell my masters what we do I Chino a spare a local companies but also on some very very very high profile domains I'll and Wyoming is in the alright but don't match but not really geared towards that I'm my question is is were we have the hardest time ranking white at I'm he has so many properties they're trying for that match arms killing it with the black hat techniques by I'm I can give you an example using this lull I the largest wind which you can crash in Sac State all CDI as an example the first one that pops up can you DL well thats site is being hop on a black hat world s01 s ranking Kaiser is out there yet right corner tomorrow in that area and see that what we're network problem comes into that is that when people search for certain things like you know %uh Miley Cyrus sex /a she hadn't have sex day but yet we started out you know you're gonna have no always offer saying hey click here is right here you'll hear it is you know and it's false information I'm so I'm just wondering from your I'm you know what I you better to you right our site that's extremely high-profile I'm I guess it's always hard I'm I probably have to take a look at it details there II people included a link to that threads and mark questions right yeah they're talking about on that I i go there to see what's going on with her but they're talking about on their it's just crazy any headed by I'm you know my side and I thank my site the number one on this Saturday that is are the people that are doing to us that we're way a hi and yet energize me that is all all Centrum Asian and is when I bought the domain my domain I'm talking about you know we 86 eight years or that just the domain and what we did when we were trying to do is try and make it known unity just talk about and yes and explain what really is going on the edge and so that are saying this is difficult to to see how how about I know you guys twice that is very very much you know i mean yeah I think we do have people on the web Santina and watch africa that area as well so it's not something that we we completely ignore me say well they're just like or blackouts are already tagged I put my heart out writer I okay not the case yeah it's something we we do watch out for and we do try it i've wasted to take action thereby that there is a lot of 
blood activity I guess than their son really makes it hard to find out while what direction should be there right so that s I think that's that's always tricky with regards to sidestep basically targeted keywords for that doesn't exist yeah I think where is is a really tricky problem it's not not just in in the adult industry it's something that's across the board where he is search for something that doesn't actually exist someone is going to be out there that has the skewers on the page into going to rate for and if you look at the searchers also thank 00 this must exist though yeah results are exactly's happened it does not something bad I think we can under lock down completely because if you search for something and I like evil unicorns nah some random example where you think well you know like anything at that that's out there on that topic is just got made up in low-quality content maybe or just someone you know and creative writer but since we don't have any real conflict that we can show for those keywords we essentially kinda show the rest anything also catch upset her more I think that's probably at least to some extent your scene that at these ages keywords where there is no great content the show so we and went straight to the farm tomorrow and say well there is a danger that to have these keywords on a weekend really pouch for bandying fantastic web pages but because there's nothing better out there well its so yeah okay Outback munis we're getting frustrated as we do break we get handsy ration Jerry class lines although I passed out but I'll you know it's not me I and II was current now well I think you look at the thread from abandoning their handicap didn't ask the other here I i don't have my my site in NYC you can is fine i was just wondering would it not be advisable to you right up and information please that would suggest that not secure video doesn't exist and that they are hoaxes I'm right East is geared towards that he would third to talk about but that it doesn't exist and it is a hook simpson to yeah at nine way to go about at John I right hand side already tried ass because especially for a high profile names like you probably have kids typing these keywords to search is also every day trying to start something new yeah I ensure their sites trying to rank for this legitimately like that I I think add Zappos there's a really interesting job on this where he served as a host you arms they have a page on there official website saying well there are no Zack oh snap and get some back government reaction and that banks really well or I where you are these used it last time I checked so that something where an official site can really help I'll and information but it all the sites are essentially in official and you're talking about content that kind of gets leaked and doesn't our official Asian then its like you aren't is and/or bring me this like and the content to the web and since its only thing we can articles keywords the we end up showing that alright I let's head through the questions that were submitted and then we can go back tonight to discuss these topics I have a 3G sites for three different countries I use and hear these my content remains the same for all three sides is a products services are the same had to go see this is due to the top and when using inter-clan mentioning three versions Hal I yes a jet plane is a great way to tackle that problem you know I essentially what will happen is we find these three sites are with three different you know 
these will try to drink it our country local and what is your plan will make sure that we're trying to appropriate person to date using the right place it's not something where you would get penalized for duplicate content is not something well in general if these are your site it's not something we analyzer sites for duplicate content this so we see there's more as a technical issue wherein he cites harassing comments and that's hard for our own review and we have to filter them out in the search results when someone is searching and Disha one of the sites to users so it's not that to use a sharpie pen arise we just chilled and I'll just try to show just for and HR playing make sure that we show the person that really fits tests for or purchase terms %uh usually say that having more than one sir or what I your website a are you should you need us there me them working them both look for me this is special now you're using Asia backpack don't go okay so these are action or same I'm though you shouldn't worry about having like 10 you and YUM I'll the sandbar yeah I mean at some point is just a factor out like and manageability here having a lot of different web sites that cause problems then candidate would your efforts so with three US site I don't see a problem West and web sites are starting to get into the area where you know maybe it makes sense to just make one really strong dot com you may not happen and you persons from there and maybe there are legal or policy reasons for you have to say well they have to have different country versions because maybe my case with version is no official on the German version parents are something crazy like that sometimes these decent so that's something where I think he's if you use it for different healthy home have you both are all the major different country versions thats often a lot less problematic than if you kinda create separate sites for different parts of the same church so that you have my eye care lou garden furniture red garden furniture us the garden furniture that something right and heading towards like the doorway age scenario whereas if these are just different country versions of the same content and that's essentially something that that and it happens for organizational reasons and not something we worry about from West Ham are web search warrant occur okay so not it's not bad girl kinda hope you guys before that I'll or in up doc it reply help us a little bit there it doesn't pass a jerk so it's not the same as like redirector concentrating everything on your but it does help us to swap out to your house yes version perhaps more relatives it if you go right yeah so it doesn't think drinking and just make sure that we show the right person when we do short or I our business is related to tourism and car rental your domain that GR for Greece some years is there a way to add other countries to the International targeting and no at the moment you have a country on hot lanny then balanced hide two that's the city-country there are some exceptions which and we have in the house article on this hot day where their country code top-level domains are better generally use imaginary way where anyone can register intended use of the mains like that Co which is actually for Columbia or .tv which is acting career some island in the South Pacific at in these domains we do treat as generic top-level domains and you can set your targeting but for most country code top-level domains we really assume that this actually targets users in that 
country so hope for the website's around tourism like horrible where you expect people from other countries actually search for this content than that something you probably want to take a look at my generic top-level domain and put in some time content there so mabel the local content on your dodgy argument on the local version and then but that international content on hand generic top-level and you can take any other new top-level domains as well there essentially all seen as generic top-level domains from her or if you so if you can't get dot com near you can guess I don't know and %uh gurus probably doesn't make sense for a car rental site but that their handle these new dot other names out there that you can you can get pretty good and easy to recognize and easy to remember domain name from the holidays and online in line is the genie's hey how come out my nose is the G so that you'll see the content on the page out we treat St Jesus images essentially so its not something where we try to recognize a text that as the G's there's actually images which other image search and ask her if you want to put content on your pages and up out and article in web search you need to put it into that the normal tissue how page or display with JavaScript or and we have an insight where past then show up for work and when they're and this I also kicked out outside for of looking at the site when it is finished as up there twice a day is this the best this char yes that's a good way to handle this on one thing to come to mind sometimes all demands are also interesting other so that's something where you deleted right away if you show 410 right away then that and drops out a search is also very quickly I'm so maybe it makes sense is a well you need I could go for a couple of weeks handheld and it's actually removed from from page and the other thing to keep in mind is if you remove their names from the site crawling and reindex so it's almost better to lead in the site in to say it has a new last modification date when you change it you know index or before 10 because that way we'll go of control that age again and see what change will notice that it for amor no index or whatever and then we dropped our index fairly quickly so instead of just removing it from the site map I'll make it make sense to in the site on and then maybe after a couple of weeks dead rule from the site heart problems and earlier who started using HTTPS is a ranking signal but it doesn't seem to be really used a so that most web sites are still not running a ship UPS at his the signal already the system or is it going to take some time its definitely already in the system re-directing packard something we do use for ranking its not something where I'd say we drop all other web sites that don't have HTTPS you get this magical blows from number and number one just for using HTTPS so it's it's a fairly small signal I it does help us for example in situations where you at the same content I'm HTTP and HTTPS so we'll try to prefer HCPs fortune in a case like that at that and helps and I think I as saying that not all web sites are already using HTTPS is obviously true but we see more MORE web sites going towards a GPS I think Wikipedia recently mentioned they're going to https I read it is moving to a should he has EPS a lot about the bigger web sites are are changing over and I think this is a trend that's not going to be reversed so it's never going to be or probably never going to use action where where like shit people say well 
only had access this site insecure way because NY if you had a choice why would you not want to use a secure connection to West yeah after nineteen can we use alternate alternative tags on dynamically created your house even it an article on those taxes that a I'm understand questioner but essentially what you need to watch out for with annexing all or with airplane or any other kind of Mark A where you talk about this is your almost the same you use because isn't euro whore as much as possible so within your web site you late exactly the same way I use a rub knowledgeable in the same way user ager land-use at indexing alternate your your house all the same way so that you really have a strong signal and and understand this is the Euro format that you want to use this is the one will index in this is the one will use for all of these additional features that you might be associating with those your house so instead have a that kind of day your house with a question mark parameters and cervical your girls pick one up close and make sure you really using that consistently major using exactly the same euro consistent so watch out for or case watch out for things like session parameters and the Euro some all these days making life harder for us understand which one we should actually picking up for index and does Google have been home with them that's constantly run cited by West Ham I keep seeing sights hey getting I doing everything Google the same not to do after exact comments and paid all links I 2013 these practices still seem to be working we do have a bunch of Britain's that track or target specific and the west and an a lot of the than normal ALT and AST comments that am beings these are things that we simply ignore it. I mean we've been seeing these tactics the EU's for wrote decade longer and instead of trying to battle those all individually on a manual low week we do chart up with some samples there and sometimes us though slips through sometimes there are things that we need to improve I never say that such where hardly be immune against any understand our algorithms can always handle anything and they're always going to be situations where we have to manually take action to figure out what's happening there at always situations where we have to improve /url homes and I could be to change algorithms to act differently that could be too it that he's out with them to you so its not the case or SAT we solve the welfare problem and you never see me west anymore because we recognize everything automatically yeah there has always something I miss lewis there and where working on improving being so if you see these parasites would you seen something consistently where is a well this is so obvious Google's algorithms are smart programmer should be able to figure this out you're always welcome the pass it on to me I you can also use their stammer or form that goes directly to the Western you or something I get bigger picture problem that you see that something you can certainly said to me on Google+ or sent by email or or through my website for example as an Asian is a hidden good pet issues no index hollow tag on passionate pages even if we have a unique content on adjudicate ages and you don't have to block paginated pages from the and so if there's something on there that you think and index I'll by all means let's be in excess not something right so you need to talk we do have some recommendations for action 80 content you regarding terrell next appropriate yes and linkage when tanks 
at that something you didn't check out in the Help Center on numerous occasions on hang out as you stated that manual links penalties are bogus takes the hassle at his sights white I if I'm links penalty is a row the site still has the way for and refresh no was not the case at with my science please explain why I did not have talked so there obviously the also always a lot of things that happen in parallel so we have I think what over to our actors are used for crawling indexing and ranking and all these things happen in parallel so it's something where some of this might be involved here other aspects my pinball as well well and on other sites that might just be that one of the suspects all so it's something that there is no like one answer fits all with regards to what you need to do what you need to wait for when you change something on your website there's a manual action and idk what looked like a lot and week and usually drop that fairly quickly if there is or renewing our routing problem then back to overcome or reverse address muppet stands in some situations Gandhi combination of I mean johnnie it seem like you know everybody's talking about the fact that they got a manual you know links penalties specifically to the badges not gon stop and you know it took over a year and a half no once had been revoked three French to come around them on the free my site so no it just seemed it's it's seems completely wrong from nap her son to just get away with not when it when it occurs here in our even though we got remote on in May 2 2013 was not rates what time a 2014 I'm I I can understand that feeling but I think it's important to keep in mind that the signs are all different it's not something where you can say did exactly the same thing as you doing the same time and they saw something completely different so thats something where where the sites the have a very different history in there very different things that we got for my world my point of view may be very different things that we pick up for manual on review so it's sad its really hard to compare I I do understand their that and that general 2009 like well I had to do to wait much longer than they have to wait like an everyone be processed as quickly or why can't everyone like have to wait this long so that something more I think weeee always need to work on improving things there to make sure that it's consistent and it's fair or and sometimes there are things that we need to or are more with regards to making sure that it's really consistent near it's just a common question that comes up in others the beeper does fuming about it is it seems like the big brands are recovering very quickly I you wouldn't say that's the case there a lot of situations that don't get picked up by the SEO blog is here and their big brands that do have problems with my actions that you have big problems with are the changes and just because nobody is talking about and doesn't mean that they're they're not starting are to try to result physicians and so many is how to aids just as long as as other side so it's not I to their sanded it looks weird when you look at the press easy lol recycling collection here and I got a result here but there are lots of sites that don't end up in the past like that too at the same problems and they asked if I much longer to under really resolve the issue cell maybe they've been building up for years and years and/or finally it and I some that have been doing things wrong for such a long time no no remembers the 
ranking in two hundred nice remain domain keyword sign notes that it's very frustrating not being a I know issues on yeah mmm not I think some her which s2 battery mind gone or a grade me a finger jury but not welcome future are in some and best %uh baton the website but no no this but what they're yeah I heard that one before I think I am really do bring it up regularly with the team to see what we can do their arm also a policy incident to try to see what what what more information will be shown whereas it was out with years Wet Asses who are trying to do the right thing a chance to and if do the right thing without getting stuck on details John I i think is not only a.m. more features in in one boss tolls but the fact that people can trust what they're saying in their already so we we have our site shows that we still have three hundred thousand links from one site from our old site on you well I'm been there for two months is not a single link going back so said just reporting stuff but isn't true present right is as frustrating as not being able to gary paulsen other stuff as well so specifically about the link reports there well I'm think the canteen is looking at and trying to find ways to improve their what is it that they were fresh speeder those records since this specifically so something that I know they're working on other or say you'd say a no I mean don't look at that one most because of the situation because at the old a mooch mailed to the news we look at that while most but I guess related question is does Google is Google working on the information that is giving off so it's a maybe we see 300,000 links here so that's what we consider your site to have and therefore we cannot in our case publishes punish you accordingly or is actually working on clean know what I'm far is accurate and therefore what's needed a different choice was gay and reported buses what you have brawl in its just a delight in the account are you working with what you're telling us are you working on something else and you're just not telling us what's up to date as I know there on 31 2008 I'm for the most part we collect is that it more classical separately so a it's not something more that we'd say this is 12 on what's going into our other loans I with regards to links its I guess specifically tricky because it works it was the show you information about links are not hollow or links that are passing any a drink where we think that maybe this is useful information for webmaster regardless because martz everyone focuses on just a drink or island but of course internally we do kinda remove stains from marlin craft that don't speed record is about I'll those I don't think so that's something where I think specifically without settling reports I they can understand that that feeling you know is this something I have to clean up or is this just Google join me information I don't really need for this specific problem but is also not bad there are any links is not a single like yeah I mean died in the crash site was 3014 three or four months so it every single page would have effectively become a link but it hasn't been for months now and they still showing us 121 why it is about 1000 to 1300 out Mike's from every part and then just not there now at sounds like we should be updating the harasser yeah you jim i'm having a look at their final okay klutzy no I'm thanks in John on the subject to those links and webmaster tools I'm we've got a trip planned trucker dukh want to combine Spencer and the sites 
don't link to each other in any way; but since we started using the hreflang markup, the link report shows the one site linking to the .com site, thousands upon thousands, twenty or thirty thousand pages of that site linking to the .com. Is that something that's supposed to be there? It doesn't sound right to us.

If there are really no links on these pages, then they probably shouldn't be listed as links, because an hreflang annotation like that, the link element, isn't really something that passes PageRank. Maybe you could send me a screenshot and I'll check with the team.

Questioner: Sure, thanks.

All right, let's run through some of the questions that were submitted, and then we can open things up for discussion again.

"If we remove old pages on our website, what's better: a 301 redirect to a generic page, or a 404?" I'd use a 301 redirect to a more general page if you have something that's an equivalent. If you removed, say, blue shoes from one brand and you still have blue shoes from another brand, then it might be worth redirecting. If, on the other hand, you removed the blue shoes and you don't have any shoes at all left on the website, then a redirect to a generic page is something we treat as a soft 404; internally we'll treat it like a 404 anyway, so it's not something where you'd gain anything by doing that redirect. Returning a clear 404 is often the easiest answer.

"We have two one hundred percent identical websites on two domains, for business reasons. One website is ranked number one for all its keywords and the other either doesn't rank at all or ranks fifty-plus. Could one have been penalized? Is there a reason for this?" A penalty shouldn't be the reason: if one site is just a copy of your other website and you're doing that for business reasons, that alone wouldn't be a reason to manually demote it. On the other hand, if it has exactly the same content, then it might make sense to use rel=canonical and just pick one version to show in search, so that's something you could consider. With regards to a manual action, I'd be a bit cautious when you see something like that: if your website is copied across two different domains, it could also mean that the content itself is actually problematic, and that's something you'd want to take care of. Especially if you're republishing content from feeds, or rewriting news articles from other sites, those are the cases where the actual content is the problem, not the fact that you have it copied onto two domains. So instead of saying "well, it's a side effect of having my content on two domains, I can just leave it alone", I'd really look at what you're actually publishing on these websites and make sure it's genuinely good content, not just something pulled together from other sites.
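As an illustration of the rel=canonical option mentioned above for two identical domains, here is a minimal sketch with example.com and example.net as placeholder domain names; each page on the secondary domain points at its counterpart on the preferred domain:

    <!-- On https://www.example.net/blue-shoes (the duplicate domain) -->
    <link rel="canonical" href="https://www.example.com/blue-shoes">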
"When using the PageSpeed Insights tool, Google suggests reducing the use of internal and external JavaScript. Does it make sense from an SEO perspective to use Google Tag Manager to run the scripts only when needed? Is this true, and which option loads faster?" I don't know the details of how Tag Manager handles and loads these scripts, but if these are scripts that actually need to run to display your pages, then just putting them into Google Tag Manager and having Tag Manager load them probably doesn't significantly change the time it takes to render your pages in a browser. So I don't know that it really helps, and from our point of view it's not something you need to worry about. We do take page load time into account, among many other things, but we mostly differentiate between sites that load in a normal way and sites that are really, really slow, where you're looking at maybe several minutes to actually display the content of a page; that differentiation isn't something you'd change by shuffling a few scripts around. So take a look and see whether this really affects the bottom-line rendering time of these pages; if it doesn't change it significantly, then it's not worth adding more complexity, and from our point of view it probably doesn't have any impact at all.

"Sometimes trying to access the cached page gives a 404 error. Why would this happen?" That sometimes happens for technical reasons on our side, so it's not something where we'd say you need to change anything on your website. We essentially just don't have the cached version of the page at that moment, and that sometimes comes and goes. It's not a problem and not something you need to fix.

"Even after implementing schema.org markup and breadcrumb navigation parameters, Search Console shows navigation attributes missing. What do we do in this case?" I don't know; this is something where you'd probably be best off starting a thread in our help forums, or in another webmaster community, and getting input from other people. You'd need to point them to your URLs so that they can look at what you actually have on the pages. It sounds like maybe something is missing in the markup, or maybe you're looking at something that isn't actually critical for your website.

"If I download an image from an article to use in one of mine, do I have to edit the image to avoid duplicate content, or can I just upload it as is?" The first thing to watch out for is making sure you have permission to use the image on your website. That's not something we worry about from an SEO point of view, but as a webmaster you need to make sure you're doing things right there. With regards to duplicate content, we do try to recognize this kind of reuse of images in image search, but it's not something I'd say causes significant problems if you just use the image as it is. What happens is that image search will try to rank the image based on the content of the page it's on; so if the same image is used on other pages, within your website or on other websites, we'll try to rank those pages based on the content on those pages in image search. An extreme edge case is the photographer's website that has a really nice photograph, where the page has nothing but the file name that came out of the camera; that's really hard for us to rank. Whereas if someone takes that image and puts it in a forum thread or a blog post, suddenly we have a lot of context for the image, and that makes it much easier for us to rank those pages. So we do recognize the duplicates, and sometimes we show one version and sometimes the other: we can't guarantee that we'll show the photographer's website every time, because there's often no real context there, and at the same time we can't guarantee that we'll always show the page where the image was re-used, because maybe people are specifically looking for the photographer.
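To make the photographer example concrete, here is a minimal sketch with hypothetical file names; the difference is essentially between an image with no page context and one embedded with descriptive text around it:

    <!-- Hard for image search to rank: camera file name, no context -->
    <img src="/photos/DSC04231.jpg">

    <!-- Easier to rank: descriptive file name and alt text, surrounded by related copy -->
    <img src="/photos/lighthouse-sunset-brittany.jpg"
         alt="Lighthouse at sunset on the Brittany coast">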
"A competitor has placed their target keyword in multiple places, white text on a white background, to push their keyword count higher. Does that actually help with anything?" Essentially you're talking about hidden text, which is against our Webmaster Guidelines, and keyword stuffing, which is also against our Webmaster Guidelines, so these are things I'd really recommend not doing. We have algorithms that try to recognize both of those situations automatically and take them into account for ranking, so chances are it's not helping that site at all and they're just wasting their time on old-school tricks, things that people used maybe ten, fifteen, twenty years ago and that don't really work any more; search engines have long since adapted to the tricks people used to try.

"When implementing HTTPS on an existing website, should we use the change of address tool in Webmaster Tools to signal the move from HTTP to HTTPS?" No, you don't need to do that. The change of address tool is useful when you're moving from one host to another, a different hostname; it isn't used for moving from one protocol to another on the same host. So just set up the redirects, make sure we can crawl the HTTPS version, and make sure you don't have mixed content, things still embedded over HTTP on the HTTPS pages, and we'll pick it up and use it in search. There's no need to do anything special in Search Console.
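The redirect setup described above could look something like this: a minimal sketch for an Apache server, assuming mod_rewrite is available (other servers have equivalent settings):

    # .htaccess: send every HTTP request to the HTTPS version of the same URL
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]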
"Does the article text have to match, word for word, what's in the meta description?" No, you can put whatever you want in the meta description. We show it to some extent in search; we decide whether to use the description you provide or something we pull from the page for the search results, and the meta description has absolutely no effect on ranking. So you can put whatever you want in there and see how it works for you.

"We're recovering from a manual action; we removed and disavowed links as recommended, but there has been no ranking change. Can you tell me if my site is currently affected by anything else?" I'd probably have to take a look at that separately, so maybe start a thread in the help forum, or ping me on Google+, and I'll take a quick look; I don't know that there's something specific I can tell you right now. A lot of the time there are so many factors involved in our ranking decisions that other things may be playing a role as well. So instead of focusing only on that one part, make sure you're also providing a lot of additional value on your site. For example, if you're removing a lot of links, make sure you're also creating something on your website that naturally attracts links, something people want to link to and refer to. Don't just remove things; build things up for the future as well.

"We have an online tool hire business and we currently have location-specific landing pages for our most popular areas. They look similar, although the content and addresses differ. Do you think this structure could be seen as spammy?" Yes, it could be, and it could look like doorway pages. If you're just swapping out city names and the rest of the page is essentially the same, as if you'd reused a Wikipedia description for every city, then it does look pretty thin: it looks like you're targeting those keywords without really providing anything of value for the users searching for them. So it might make sense to concentrate your efforts on fewer pages, and to make sure those pages are really high quality and really provide something of value, rather than being a collection of city names and category names swapped into a template.

"Is there a chance that sitelinks will become something webmasters can control? Google keeps picking ones I don't like for my site." I don't see that happening at any point in the near future, at least. You can demote sitelinks if you really don't like them; the tricky part is that which pages we show as sitelinks depends on the query, so even if you demote one, we may still show it for other queries where we think it's a really good match. As for providing your own sitelinks, I don't see that happening in the near future either; we do get requests about it from time to time and discuss them with the team, but I don't have any news I can share.

"How large can an RSS feed be? We have 10,000 URLs in ours now. Should we limit it, and what's the recommended number of URLs in an RSS feed?" I don't think there are any specific limits for RSS feeds, so it's more a question of what you can do on your side to serve them efficiently. Usually you'd update an RSS feed as you update your content, and if only a small part of your content changes, you don't need to list every URL every time. So if ten pages changed today and you're serving 10,000 URLs in the feed, that's probably not the best setup; on the other hand, if your server handles it just fine and the feeds are created, served and submitted automatically, then maybe that's perfectly workable. With regards to RSS feeds versus XML sitemaps, I did a blog post about that, I think late last year, so I'd check that out as well; we also mention there that you can use PubSubHubbub with feeds so that we can pick up new content faster.

"Is there any difference between the robots meta tag and the X-Robots-Tag HTTP header? Does either take precedence on a page that contains both with conflicting directives?" We treat them essentially the same. The main difference is that the X-Robots-Tag HTTP header can be used on non-HTML content; for a PDF or a text file you can block indexing with the HTTP header. There's no order of precedence between them: we take the most restrictive directive we find, so if you have a noindex in one of them and an index elsewhere on the page, we'll use the noindex, the most restrictive one we find.
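As an illustration of the two noindex mechanisms just described, here is a minimal sketch; the header form is the one you'd use for PDFs and other non-HTML files:

    # Robots meta tag, placed inside the <head> of an HTML page:
    <meta name="robots" content="noindex">

    # Equivalent directive sent as an HTTP response header:
    X-Robots-Tag: noindex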
All right, let me just see if there's anything in the hangout that we need to handle. There's a question regarding click-to-expand content, but that sounds like a longer discussion topic and we just have a few minutes left, so let me defer that to a future hangout. One more: "Will the Search Analytics report allow saving of queries in the future? I regularly look at search terms, rankings and clicks for specific URL matches, but currently I have to set these filters every time, which is a pain." What I do there is just set up the filters and bookmark the resulting URL; you can simply bookmark it, or you could even create a simple page on your site that links to those filtered reports so they're easy to get back to. We're also working on an API, so if you're interested in that, send me a note and I can try to get you set up for it as well.

All right, let's go back to questions from you all. What else is on your mind?

Questioner: John, a question about curation and content. I'm noticing that seems to be a big buzzword at the moment, and what I'm seeing lately is people taking large blocks of text from someone else's site, putting them on their own site, and then adding maybe one paragraph of their own that isn't in the copied block. Is that something you think people should be doing, or are they going to run into trouble?

There's probably a copyright question in there which I can't really touch on, since I can't give legal advice, but it's something that might be worth reviewing from that angle. From our point of view, what happens is that we crawl and index this content, and if someone is searching for something that's within that copied block, we'll treat those pages as being more or less equivalent and try to work out which of them is the most relevant one to show in search. So with a bigger block of copied text that we know appears across multiple sites, we'll try to pick the most relevant page; it's not the case that you gain an advantage by copying this content, and we're not going to show all of those pages. As for how much you can copy while still adding your own content, that's something you almost have to judge from the point of view of your users: would they think "this is really high-quality content", or would they think "this guy is just scraping other sites and saying, I saw this information on this website, what do you think?", which might not be seen as very high quality.

Questioner: So do you think a site like that could run into quality issues? Would it still be a Panda-type, site-quality problem?

Yes, I think it could be seen as low-quality content, and it really depends on how this is handled across the site. If the whole site is essentially just a collection of copied content with a line or two of your own added, our quality algorithms might say: this website is mostly reusing content from other people's websites, we probably don't need to focus on it so much because we already have this content indexed elsewhere. So our quality algorithms could pick that up, and if it comes on top of other quality issues, that's potentially a big problem.

Questioner: Even if they link back to the source? Sites like the BBC or others might quote quite a bit of content too.

It also depends on how it's handled across the site. If these are individual posts and there's a lot of really great, unique
content on the website overall, then I think that's probably fine. If, on the other hand, the whole website just consists of content taken from other people's sites, then there's no real reason for us to index that content any more, because we already have the original source; in a case like that, even a link back to the source doesn't really add much additional value.

Questioner: Sure. All right.

Any more questions?

Questioner: Yeah, John, regarding the statement that Google has over 200 ranking factors: has anyone actually been counting since around 2006? I'm a little bit curious whether that number is still up to date.

Well, that's what we've been saying for a long time; we definitely have something in the neighborhood of 200 ranking factors.

Questioner: Could it just be that nobody has counted them since then? Could you just as easily say you have 300, or 400?

It might just be that nobody has really recounted them; we said we had at least 200 back then, and we've been working on them and expanding them over time. It's not the case that we have a spreadsheet of ranking factors that we can always go back to and look up, where we say "great, we're adding a couple more this week." To a large extent this just goes to show that we take a lot of different things into account, and you don't always have to focus on the same thing: one website might do well mostly because of its links, another mostly because of really high-quality content, and it's not the case that all of these websites have to be identical or do the same thing. We take all of these different ranking factors, look at a variety of signals, see where we end up, and compare on that basis. So it's not that there's a small number of things we look at and you have to hit exactly the right ones, otherwise all your work is for nothing. What matters isn't the exact number of ranking factors; I know SEO blogs love to look at this and say "here are the 200-odd ranking factors, let me rate them all for you." I think that's always interesting to read as a view of what people outside Google think might be worth reviewing, but it's not the case that you need to focus on an exact number, and it wouldn't really help anyone if we said there are 327 today and next week we're adding two more. I don't know how that would be useful; there isn't an exact number you should be tracking.

One more thing worth mentioning: sometimes we go to high schools and talk to students about search. We show them how crawling and indexing work, and when we get to ranking we ask them to work out what ranking factors they would use. It's really interesting to get input from people who aren't involved in the SEO community at all, thinking "which pages would I pick if I were a search engine?", and you get some really interesting ideas. So if you have a chance to do that, I really recommend talking with people completely outside of SEO and asking them how they would rank content; it can be very interesting.

Questioner: Thanks.

Now, did I miss anyone? Eric, did you
have your hand up?

Eric: Yes, thank you. A couple of quick questions, if possible. You mentioned image results earlier; are those included when positions are calculated in Search Console? And you mentioned local results: do you take the local results block into consideration when calculating position?

For the local block, no, I don't think so. I'd have to ask to be completely sure, but I'm pretty sure we don't. We do take into account images shown within the web search results, the image block on top of the results; that's something we do count. I don't think we count the local results block in the same way. So if you search for something and there's a block of images on top of the web results, that's something we take into account.

Eric: And are the images counted individually for those keywords, or all together, or counted separately?

With images, the tricky, or confusing, part is that we count them as web search results if they're shown within the web search results, even though they're images; whereas if they're shown in image search, we count them as image search results. So even when images appear inside the web search results, they're reported as web search results with regard to ranking for those keywords.

Eric: Okay, that kind of makes sense; so it's like two images within web search, but counted as web search. Okay. And I have a problem with the breadcrumb on one of my pages: I hide the "home" element of the breadcrumb. Would that be a problem, or do you think it's spammy if I hide it with display:none or something?

I think display:none is probably fine there. It did start some discussions with the breadcrumbs and structured data team, so I don't have an absolute answer for you just yet; but the fact that it started discussions suggests that at the moment this is a somewhat undefined situation which might end up working one way or the other. Using display:none there should work; it's not a problematic implementation as far as I can tell.

Eric: Okay, good to know. Thank you.

All right, I have to wrap up. Thank you all for joining; it was really interesting, good to have you all here again, lots of good questions, and I hope it was useful for you too. I've set up the next hangouts, which will be in two weeks; I'll probably post the sign-up later today, so you can sign up there and add questions, including anything I missed or anything that only became clear through the answers here. With that, I wish you all a great weekend and see you again in one of the future hangouts.

Participants: Thanks, John, have a great weekend.

Okay, thanks. Bye everyone.

Welcome everyone to today's Google Webmaster Central office hours hangout. My name is John, I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do is talk with webmasters like the people here in the hangout and the people watching this video. We have a bunch of questions that were submitted already, but if any of you who are new to the hangout have questions, we can
take those first, and then go through the submitted ones.

Questioner: I have a question, John, if you don't mind. I'm having a persistent problem with Google Local. I've got a business, a doctor, whose clinic name is essentially his last name; it's included in the business name, and there's a personal listing that comes up for him that's associated with his business. We've tried to get it removed and have called Google about eight or nine times about the problem, and they're saying that the search algorithm is serving up that local listing. The proper business listing, which has the reviews, a virtual tour and the logo, doesn't get any visibility, and we're not able to get it to come up for the business name search. Is there anything that can be done to fix that?

I don't really know much about the Google+ Local or local business results side, because that's not really part of the normal web search results; it essentially comes from the Maps side, from the local team.

Questioner: They say the exact opposite: that they can't do anything because it's the search algorithm, they don't influence Google Search, and they can't move it up the chain. So I'm kind of stuck; I've been going back and forth and I don't know who else to call.

Okay. What I'd do there is maybe start a forum thread where you post the URL you're seeing and the search query, maybe a screenshot, so that someone else can take a look and see what's actually happening. If you send me the URL of that forum thread, I can double-check and see who it could be routed to. I can't promise we can clean it up, because some of these things take a bit of time to get reprocessed, but if it really is coming from web search itself, so it's not a local business result and it's ranking for other reasons, then maybe that's something I can help with.

Questioner: That would be great. Would you mind if I ask one more quick question? It's about black-hat web spam. What we're seeing is everything from social media links that follow through to Google searches, to underground forums asking their members to do searches for specific doctors and businesses in the area, and we're seeing those businesses rank very well for the searches we're checking. What can be done, on a white-hat level, to combat those kinds of user-generated tactics?

Tactics like that generally wouldn't have any effect on search, so it may be that they're just doing things that don't really have any big impact, that it simply keeps them busy, and that they're ranking for other reasons regardless. But it does sound like something I'd love to have the team take a look at, so if you can send me the details, maybe through Google+ directly, I can pass them on to the team here to check out. It sounds like they're doing something they think helps but which probably doesn't; still, I'd like to double-check.

Questioner: Okay, absolutely, I'll send that over. Thank you very much.

Thanks.

Next questioner: Good evening. I have a question about
duplicate content. We have two websites: one, call it site A, about right-hand-drive car rental, for example in Spain; and another website, several years older, about car rental in Spain in general. We'd like to merge both and improve the content, so that we have really good content on one site. My doubt is this: if we're no longer going to be "right-hand-drive car rental" but simply "car rental in Spain", will all the old right-hand-drive material dilute the new theme? Is it a good idea to merge them, or should we drop the right-hand-drive car rental website? What do you recommend?

I usually recommend combining things into one really strong website rather than spreading them out and diluting your content. So if there's a way you can focus on one really strong website, that's a great thing. If you really need to keep them split, because they're very different markets, for example one website targets consumers and the other targets businesses, then maybe a split makes sense; but in your case it sounds like both sites address the same audience, so I'd consider combining them into one strong website. There will obviously be some fluctuations as things settle down and as we understand what's happening, but I think in the mid-term it will lead to a stronger single website than what you have with the separate sites.

Questioner: But the theme of one of them was right-hand drive, and we're not going to have that any more; will Google recognize our new theme, that it's all about car rental in Spain?

I think that's something that just takes a bit of time to settle. In the beginning, when you set up the redirects, redirecting the right-hand-drive website to the other site and maybe setting up new pages there, you'll see fluctuations, maybe for a couple of weeks, maybe a little longer, but it does settle down. It's not the case that we'd say "this is right-hand-drive content and it doesn't fit the rest"; we can see that it's related content and that you're essentially creating one stronger website, so that shouldn't be a problem.

Questioner: And the last one, sorry: if we improve the content so much that it's totally different, about the same topic but completely rewritten compared with what's there now, is that a good idea or not?

If you're targeting the same audience with the same kind of content, that's perfectly fine. Lots of websites update their content over time, and sometimes it ends up looking completely different, maybe after a redesign; that's normal, I don't see any problem with it.

Questioner: Great, thank you very much, John.
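The merge John recommends is usually done with host-level 301 redirects that preserve the path of each old URL; here is a minimal sketch for Apache, with made-up domain names standing in for the old right-hand-drive site and the combined site:

    # .htaccess on the old right-hand-drive domain: redirect every URL
    # to the same path on the combined site
    RewriteEngine On
    RewriteRule ^(.*)$ https://www.example-carrental.com/$1 [R=301,L]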
Another participant: Can I ask a follow-up on that? Fernando, is the new site going to be on a .com or on a country-level domain? I ask because I've seen people searching for right-hand drive in Spain who aren't actually in Spain, so it could affect whether the content reaches them; if they're searching from outside, it's worth checking whether you'll be on a country-level domain or a .com.

Yeah, theoretically you could do either one; I don't know what you're planning to set up. If you have two distinct audiences, and people outside the country are presumably the ones searching for right-hand drive, then it might make sense to keep the sites separate if you can target the two websites differently; that would just be my thought on whether or not to combine.

Participant: We had the discussion a couple of weeks ago about how, in Webmaster Tools, a country-level domain usually targets people inside that country. So if he stays on a .es he keeps a lot of the Spanish users, but he might suddenly lose the people searching for right-hand drive from outside Spain.

Yeah, in a case like this geo-targeting in general is something to watch out for, because if you're providing rental cars in Spain, you're probably also targeting markets outside of Spain, so you wouldn't necessarily want a .es domain for something like this: it essentially says "I'm talking to users specifically in Spain", whereas if you're targeting people who are coming to Spain and need a rental car, a more global setup rather than a country-specific domain is probably the better fit.

Questioner: John, about the core quality update: is it applied on a site-wide level or on a page level? Say a site has some pages that are affected and others that aren't.

We try to be as granular as possible, so where we can, we'll aim for something at the page level; but a lot of these broader quality algorithms work more at a site or sub-domain level, where we say "in general, this site is like this", because it's really hard to determine that one individual page is a little bit different. So at that level it's probably more per site or per section than per individual page.

All right, there's a question in the chat from Michael: do we think there's an update rolling out, and can I say anything about it? The SEO weather tools are showing movement in Germany. I don't know of anything specific that's happening, and I don't think it's the Panda update; that one is still coming. If you're seeing changes in search, a lot of those are just the normal changes we make all the time; some sites see them more, some less. With the specific changes you mention, I saw some reports in French and some in German, but I also saw a lot of people saying they don't see anything, so it might just be something small.

All right, let's run through the questions that were submitted, and then we'll open things up for discussion again.

"What would be the right way to block a URL like domain.com/#!path, basically a URL with a hash fragment, in robots.txt?" The tricky part here is that we don't support fragments in robots.txt: we only look at the path and the query-string parameters, and any time you use a hash in robots.txt we assume it starts a comment in the file. So blocking crawling of a specific fragment like that, which strictly speaking we don't crawl directly anyway, isn't possible in robots.txt. For a URL that has a fragment at the end, we have to crawl the main URL anyway, and then render the JavaScript to see what shows up for that fragment; we can't block crawling of only the fragment part, because we still have to fetch the main URL. So if you're using fragments as part of your URLs, that's something you won't be able to control with robots.txt; you'd have to block the main URL if you wanted to do that. The alternatives are to use the HTML5 History API, so that you have normal URLs while still running a JavaScript-based application, or to use the AJAX crawling setup that we support, where the escaped-fragment URLs can be rewritten; in both of those cases you can block the URLs from being crawled with robots.txt. With a plain fragment you can only block the main URL from being crawled; you could put a noindex on the content, which we'd see when we render it, but you can't block the crawling itself.
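The robots.txt limitation described above is visible in the file format itself; here is a minimal sketch with a made-up path:

    # robots.txt: the '#' character starts a comment, so a fragment can never be matched
    User-agent: *
    Disallow: /app/page#!section    # everything from the '#' onward is read as a comment
    # The most you can do is block the main URL, which also covers every fragment on it:
    Disallow: /app/page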
"We have recurring server errors and Google reports them with a timestamp only for the day, in day/month format. Is it possible to get the hour, minute and second? Otherwise we can't find the right log entries." I believe Webmaster Tools does only show the date there, so it's not something I can directly help with; if you post in the help forum, I can try to look it up and see whether I can give you a more exact answer, but otherwise Webmaster Tools really just shows the date.

Questioner: That was me, I posted the question. What do we need to do to find the hour, minute and second?

If you can post something in the help forum and send me the link, I can double-check whether there's something specific I can send you. Otherwise, one thing you could do is look at your server logs to pick this up; sometimes you see an error appear sporadically on a website, and if it's really sporadic it's hard to find a systematic way of catching it.

"In Google Search Console, the new Search Analytics panel only shows 999 keywords. I need to create monthly reports on the number of keywords we rank for, so how long will the older report remain available?" We do plan to switch the old report off at some point; I don't know the specific date, maybe a couple of months, something in that range. We are going to show the total keyword count fairly soon, so you'll at least see that you have a thousand or more; but getting the full list of the actual keywords isn't something the UI supports at the moment. I don't know whether that will be added later, so it might make sense to sign up for the API and try it out to see whether you can get that data there.

"After a company merger, is it better to keep the two websites separate even though they sell the same services? Both sites are on the same IP range, and we keep updating both." If you think it's worthwhile keeping them separate, that's obviously something you can do; with just two sites I wouldn't feel strongly either way. Sometimes it's simply a lot easier to have one site to maintain and update, and generally that makes the single site a little bit stronger in search as well, so it's often worthwhile to combine them and make one stronger site rather than having two that are similar but not equally strong. If you have more than just a handful of sites and you're wondering what to do, I'd definitely recommend consolidating into a smaller number of sites; with just two, sometimes it makes sense to keep them separate and sometimes to combine them.
"Our website has been losing traffic because some plain text articles are getting video rich snippets. Can Google take a look at the issue?" Essentially what you'd need to do is post in the help forum, and you're welcome to send me the link, and then I can take a look to see whether there's anything specific you can do on your side or that we need to do on ours. Sometimes we pick things up in a way that makes it look like there are videos on pages that don't actually have any; I'm happy to take a look, and sometimes there are things you can do on your side to prevent that from even being considered.

"Hi John, did Google run any Panda or Penguin updates recently?" We talked about this recently; from my point of view, at the moment we don't have anything specific we're able to share, but we do have the Panda refresh coming, which should happen in the near future, I'd guess in the next couple of weeks, and then we'll have something for you.

"Will webmasters soon be able to have any input on which sitelinks Google shows in the search results?" You do have some impact already, essentially the demotion of sitelinks: if there are some you really don't like, you can tell us about them, and we'll try to take that into account when we generate the search results. The thing to keep in mind is that sitelinks depend on the query; for some queries it makes sense to show a particular page, for others maybe not, so demoting something doesn't mean it will never show up. As for providing your own sitelinks, that's not something we support at the moment.

"Is the breadcrumb markup a ranking factor for desktop or mobile? Why does Google recommend using it?" We definitely recommend using this kind of structured data markup because it gives us more information about your pages and about how they should be shown in search; the breadcrumb markup in particular is something we can show right away in place of the URL. But it's not a ranking factor: it's not the case that using this markup makes your page more relevant for these queries, and users don't see the markup on the pages themselves. It does help us better understand the pages, and if the URL line in the search results looks nicer because of the breadcrumb markup, that may send you a tiny bit more traffic, but we don't use it as a ranking factor.
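As one concrete form of the breadcrumb structured data being discussed, here is a minimal JSON-LD sketch using schema.org's BreadcrumbList, with made-up page names and URLs; microdata and RDFa variants exist as well:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Home",
          "item": "https://www.example.com/" },
        { "@type": "ListItem", "position": 2, "name": "Shoes",
          "item": "https://www.example.com/shoes" },
        { "@type": "ListItem", "position": 3, "name": "Blue shoes",
          "item": "https://www.example.com/shoes/blue" }
      ]
    }
    </script>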
"Does a nofollow on a link mean it has no value? Does Google see it as 'do not follow' or as 'I don't trust this link'?" Essentially the nofollow just tells us not to pass PageRank through that link. That could be because it's a paid link, or an advertisement you have on your website, and advertisements shouldn't pass PageRank; or maybe it's user-generated content where you can't vouch for all of the links because you don't check them manually, and nofollow is a great fit for that too. So it just tells us not to pass PageRank; it's not the case that we won't look at the link at all, and it might happen that we fetch that URL anyway because other links point to it as well. It's not a block in the way robots.txt is. And the second part of the question: "if we buy an advertising banner on another website, should the link be nofollow or not?" As I mentioned: if the link exists because of an advertisement, then putting a nofollow on it is exactly what you should be doing.
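For the advertising-banner case above, the nofollow is just an attribute on the anchor; a minimal sketch with placeholder URLs:

    <!-- A paid banner placement: the nofollow tells search engines not to pass PageRank -->
    <a href="https://www.example.com/landing-page" rel="nofollow">
      <img src="/banners/spring-sale.png" alt="Spring sale at Example.com">
    </a>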
"How do you suggest tracking the new Google Web Light transcoded version of a webpage? It always produces the same server log entries; how do we split them out, and are they all users or a mix of bots and users?" We are, I think, supporting Analytics for this now, or will very soon, so you'll be able to see the transcoded views directly in Analytics. If you're not using Analytics it's a little bit tricky, because we combine these requests: if there are a lot of requests for the same URL, we send your server just one request for it, so you don't see them individually in your server logs.

Questioner: We're not using Analytics for that yet, and nothing really pulls it out of this one big log. We can see which URLs the requests go to, but they're not the standard URLs the crawler would fetch either; you're fetching some things and we have no idea what's actually being viewed, or whether it's a person at all.

If it's fetched with the Web Light user agent, that's essentially a proxy for a user who was trying to access that URL, so it's at least one person; it might be two or three, it might be a lot more, because we batch these requests to speed things up rather than requesting the page from your server every time. If you need us to fetch it every time, you can use, I believe, the no-transform directive in the Cache-Control header; that's documented, and it basically turns this feature off. I don't know what the status is for third-party analytics providers, or how you could compile that information otherwise. What you could do is send us feedback through the email address in the help pages, which goes directly to the team, if there's something specific, for example that the analytics provider you're using needs this data, or that it would help to have this information included in the HTTP requests, a bit like what happens with RSS feeds; send that in and we can take a look.

"Does Google take into consideration the size of a company or site when processing responses to manual actions? Rap Genius and other well-known sites were processed very fast; similar smaller sites wait for weeks and months." We don't really take into account what kind of site it is or who owns it. In a lot of these cases the web spam team doesn't even have background information on what kind of site it is, how big it is, who owns it or who has invested in it; that's not something we take into account when processing manual actions. What does happen is that in some situations, when the web spam team looks at a reconsideration request, they see that the site did a fantastic job: a complete clean-up of the issues, maybe they even wrote their own tools and fixed the issue on a broader scale. Those are the situations where the team says, this is fantastic, clearly we should revoke the manual action because it's no longer relevant. Some of the sites that people complain about, when I look into them to double-check what actually happened, really did a fantastic job of cleaning things up. So it's not the case that we say "this is a well-known site, therefore it gets processed first"; it's really based on what you submit in the reconsideration request. A lot of the reconsideration requests I see are really low quality, essentially just "I fixed my issue", without any information about what was actually done, and those are the ones that tend to go back and forth a couple of times, or where the web spam team says "let's look at this again in a week or so to see what actually changed", because there's no information about what happened; that kind of request just takes a lot longer to process. Whereas if you send us a reconsideration request that's really to the point, telling us exactly what you did and including information showing that you've completely cleaned up the issue, that helps us process things much faster: we don't have to pass it around to other teams and discuss it, or wait to see how things get reprocessed in web search; we can look at it, see that it's clean and well done, and take that as a sign that we can remove the manual action. So if you have great people submitting reconsideration requests who do a fantastic job of cleaning things up, sometimes those get processed a little bit faster. And of course sometimes we do have a backlog that applies across the board for all websites, when the manual actions team simply has a lot on its plate and things get processed somewhat more slowly. If you're seeing things take a lot longer than a week or two, that's something you can bring up with us, and we can check with the team to see what's happening: whether it got stuck because it fell between the cracks, or whether there's something we can do to help speed things up.
"Can JSON-LD be implemented dynamically? Are there limits in how it's interpreted, or some kind of implementation that won't be parsed correctly?" You can implement JSON-LD with JavaScript, generating it on the page dynamically; the tricky part is that we then have to render the page with JavaScript to see that markup, the JSON-LD data. If we pick up a page and can see the markup in the HTML right away, that helps us process it a little faster than if we first have to render the page in our browser. The other issue we often see when it comes to rendering pages to pull out metadata is that a lot of the time the JavaScript, or the server responses it needs, are blocked by robots.txt; in a case like that we obviously can't crawl the JavaScript and pick up what it's doing. You'll see that in Webmaster Tools: there's a blocked resources feature, which is an aggregated view of this, and you can also test individual pages with the Fetch and Render feature to see how they render. You won't see the generated JSON-LD markup there, so you can't confirm directly that it was produced, but you do see whether the JavaScript runs and generates the rest of the content on the page.

Questioner: John, on the issue I reported about a week ago regarding the sitelinks search box markup: basically, the example in the documentation generates an error in the structured data testing tool.

Yes, we found out about that and passed it on, I think a couple of days ago, to the sitelinks search box team, so I assume it will be fixed in the next few days, maybe a week or so. I think it's just the structured data testing tool picking it up incorrectly.

Questioner: Okay, but there's no problem with using that markup in the meantime?

No, that's fine.

Marcus: While doing app deep linking for my app, around 3,000 "URI unsupported" errors appeared. I found your comment saying this kind of status can take quite some time to get flushed out completely. Will my app be indexed if I ignore these errors?

The URI-unsupported error for app indexing essentially means that you supplied an app URI, within your pages or your sitemap, linking to an app page, but that app URI doesn't actually work. For the specific cases listed as errors in Search Console, we won't index the app page, because we weren't able to confirm that the link is correct, meaning that the link from your web page to the app and the link from the app back to your web page match up; when they do match, we can take that into account for app indexing. If you've fixed the problem in the meantime, this will simply update over time: as we recrawl those pages the errors are cleared automatically, so it's not something you need to manually clean up, flush out or reset. Crawling app pages is a bit slower than crawling web pages, so it's not as fast as changing a meta tag on a web page and having it picked up right away; it takes a while, depending on the size of your website and your app and how many pages are involved. The rest of your app should be indexed normally; it's really just the specific pages where we picked up this error that we won't show with app indexing, and in those cases we'd still show the web page in the search results anyway. Your content won't disappear from search; we just won't swap in the app deep link for it.
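To make Marcus's question concrete, here is a minimal sketch of the kind of web-to-app annotation involved, with a made-up package name and URL; the error in question means the app URI declared this way couldn't be resolved on the app side:

    <!-- On https://example.com/recipes/apple-pie, declaring the matching Android deep link -->
    <link rel="alternate"
          href="android-app://com.example.recipes/http/example.com/recipes/apple-pie">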
On the social media links question from before: if you have more information on that, feel free to send it to me.

"How do I get my site to appear in search results when SafeSearch is turned on? Subpages appear, but the home page doesn't, and it doesn't have adult content on it." I'd need to know the URL, so this is something I'd recommend posting in the help forum so that other people can take a look and escalate it to us if there's something really odd that Google needs to examine. Sometimes what we've seen, and I don't know what kind of site you have, is that a site says it doesn't have adult content and shouldn't be filtered by SafeSearch, but when you look at it, it's still quite racy, borderline adult content, where even on manual review we'd say the SafeSearch algorithms aren't off base: this isn't something we'd want to show in SafeSearch results, and it gets filtered out. So if it's borderline content, where the home page is fine but lower-level pages are clearly adult-oriented, that's a case where we might keep the filter. On the other hand, if it's a completely normal website that's being filtered out of SafeSearch for no apparent reason, that's definitely something we'd want to look at, to see what's actually happening: is it something we need to review manually, or something we need to work on with the engineers who handle the SafeSearch algorithms so these cases are handled better. For that we'd need more to go on, a forum thread we can look at, or other details.

"How do we deal with a 'thin content with little or no added value' manual action? We have two identical websites on two domains only because one country has blocked the original domain, so we use the new domain to country-target that second country." In general, from a web search standpoint, if you have two domains with the same content, we won't treat that as thin content worth demoting. Just having a copy of the same content on a second domain isn't a reason for us to say it has no value; sometimes there are technical reasons to duplicate things across websites, sometimes there are geo-targeting reasons, maybe you have the same content on your UK pages as on your US or Australian pages, and that's perfectly fine. But if you do have that kind of manual action, it means the content itself is seen as problematic, not the fact that it exists in two places. Maybe you're taking, for example, an Amazon feed and simply republishing that content on your site; that's a case where we'd say there's really no point in indexing or ranking it separately, because the content already exists on other websites and you're not adding anything. Those are the situations where you need to look at the content and ask: is this really high-quality content that people should be finding in search, or is it essentially the same as lots of other websites that
are already out there? So in a case like this, with two domains and a manual action like that, I wouldn't assume it's just a technical issue of having two domains; it's probably really about the actual content you're publishing.

"What would work better for local rankings: four different websites, one per location of my business, or one site with a page per city? Right now we have separate domains." I think once you go past a handful of websites each targeting a separate city, it quickly turns into a bad doorway-page scenario, where you essentially have the same content and are just swapping in the city name, so that's something to watch out for. If, on the other hand, you have really distinct business entities with genuinely different content, say a really special hotel in one city and a completely different, equally special hotel in another city, with no overlap, then it can make sense either to keep separate websites or to keep them on one website with separate pages; I wouldn't have a strong preference there. Personally I'd prefer one website, simply because it's a lot easier to maintain one site than four separate ones doing the same thing. But do watch out that you don't drift into saying "my business serves all of England, therefore I'll create a separate page for every city out there", because that's definitely the doorway-page scenario, where you're swapping keywords on a page to try to rank for individual terms, and that would be against our Webmaster Guidelines. For a few websites covering genuinely different locations, though, I think that's generally fine.

"Does a product page have to contain a 300-word description in order for Google to find it interesting for users?" No. You can provide as detailed a description as you want; we don't count the words on these pages and say "this is a 299-word description, therefore it's low-quality content." You do have to make sure that whatever you provide is high quality regardless of the word count; sometimes that means a really short description, sometimes a really long one. It's not something where we look at the word count and decide whether it's good or bad.

"How do I get the HTTPS version to rank instead of HTTP? I have a number of different versions listed in Search Console, but I can't see any way of setting a preference." In general, in a case like this, you'd set up 301 redirects from the HTTP version to the HTTPS version, and maybe even set up HSTS to make sure people only access the HTTPS version. That gets picked up by search automatically; you don't need to set any preference in Search Console.
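The HSTS header mentioned above is simply an extra response header on top of the 301 redirects; here is a minimal sketch of an HTTPS response, with an illustrative max-age value:

    # Response from https://www.example.com/ once the redirects are in place:
    HTTP/1.1 200 OK
    Strict-Transport-Security: max-age=31536000; includeSubDomains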
"A site lost nearly all of its SEO visibility overnight just by setting up a new website, the only difference being that it's now the non-www version instead of the www version. Would Google not rank it because of that? We ask because we've seen a competitor drop like that around home loans and related terms." One thing to keep in mind here is that if you move your own website from the www version to the non-www version, the data gets picked up separately for each version in Search Console. So if you only look at the www version of the site, and you redirect it to the non-www version, then it will look like everything dropped for the www version, because that's no longer the one that's actually shown in search -- and that can happen fairly quickly. It doesn't happen if you verify both of those versions in Search Console; then you essentially have the full set of data and you won't see that drop. But just moving from one host name to the other like that shouldn't, by itself, cause a site to drop out of search. OK, this is I think the local business topic we talked about briefly. "In relation to the core quality update -- the so-called Phantom update -- does it assess data in real time from the crawl, or does it require a manual push or refresh?" I don't know how much we can say about this, but essentially most of our algorithms do try to pick things up in real time: when we crawl and index pages, we process them and take the signals into account right away. But of course, with bigger algorithm updates -- if we make bigger changes to the algorithms that process this data -- you'll see a bigger change happen when we actually push that update out. So the algorithm works more or less in real time as it processes the data, but when we change the way we process the data, we push that out as an update. In that sense you could say it's a combination of both. "A question about rendering and crawling: is crawling different from the cached text? Is the cached version only the visible content that you see, or the source code of the page including JavaScript and so on?" The cached version, I think, is based on the HTML page that we pick up, so it's not based on the rendered version -- it's essentially just the HTML file that we picked up through crawling. So if you have, for example, a JavaScript-based website, what you often see is that the cached version is actually just the JavaScript framework -- the HTML that was found at that URL -- and not the actual rendered content. "How long does it take for Google to index an e-commerce blog post or page after publishing? Does Fetch as Google or the Data Highlighter make it easier to index?" If you use Fetch as Google and Submit to Index, we'll try to pick it up as quickly as possible. It doesn't really matter what kind of content it is; usually we'll try to pick it up within a day or so. But if you publish new content on your website and you let us know about it with a sitemap file or something like that, we'll try to pick it up just as quickly. So it's not the case that you need to use Fetch as Google to submit things to indexing. And the Data Highlighter doesn't submit anything to indexing -- it essentially just marks up pages that are already indexed. "Which disavow tool or service would you suggest, John?" I just use a text editor myself, but I know there are some really good tools out there that make it a lot easier to process this data, to pull it all together, and to contact the sites as well. So I don't know which ones I'd recommend -- I haven't used them myself -- but I'm sure there are some really neat tools out there.
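Whatever tooling is used to assemble it, the file that ultimately gets uploaded to the disavow tool in Search Console is a plain UTF-8 text file. A minimal sketch, with placeholder URLs and domains:

    # Lines starting with # are comments
    # Disavow a single URL:
    http://spammy-directory.example/listing/some-page.html
    # Disavow everything on a domain:
    domain:link-network.example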
"Why is a non-mobile-friendly site still ranking on top?" -- and there was a second part of that question about an app. The mobile-friendly update does give mobile-friendly pages a slight boost in the mobile search results -- so for someone searching on a smartphone, we'll try to boost the mobile-friendly pages that we find. But that doesn't mean that everything else disappears from the search results, and it doesn't mean that a page that's relevant but not mobile-friendly can never rank number one. If we find something that's really relevant and it's not mobile-friendly, it will still rank number one. So that's something to keep in mind: it doesn't act as a filter where everything that's not mobile-friendly disappears. I think the other part of the question was about an Android app or a general smartphone app that works well for some sites. If you set up your website to work together with an app and you use the app indexing setup, then that does help us understand that these pages are actually usable on mobile for people who have that app installed, and we can show a link to the app in the search results as well if you have that set up. A follow-up question: "This site changed its URLs from www to non-www, but they didn't put a 301 redirect in place from www to non-www, so they lost a lot of their link equity -- doesn't it make sense that they dropped that much because of that?" Not really, no. We try to make our algorithms robust enough that even if the webmaster does something wrong, we still try to figure it out. So if you have the same content on the www and the non-www versions and you don't have a redirect set up, it's not the case that we'd say, well, this is a really bad webmaster, so we'll remove the site from search completely. We try to figure out which of these versions you actually want to have indexed, and we just use that one. If you set up a redirect, you can obviously tell us directly which version to index, and we'll follow that; if you don't give us any information, we'll still try to make an educated guess. But it's not the case that we'd drop the site from search because of it. "We have many categories on our website crossed with cities. The problem is that for some of those category-plus-city combinations there are no results at all. What should we do with those listings -- canonical them to the main category, or maybe noindex them?" This is essentially an empty search results page, so it's something I'd put a noindex on -- maybe even serve a 404 -- if it's really a page where there's absolutely no content for that URL, for that query. So again: noindex, maybe even a 404 or 410, to really make sure that those pages drop out of the search results. Because what essentially happens here is that we run across these pages, we see something that says 'no results found' for this query, and we think, well, maybe this is just temporary, or in the worst case we index it like that and show it to users in the search results. Then someone looking for this category of thing plus this city name lands on your page and doesn't find any content at all -- and that's really disappointing for users. In turn, that's the kind of feedback we get back from them, where we'd say, well, maybe we need to take action on this somehow, maybe we need to figure this out algorithmically. So this is really something where, if you know that you're not providing any content for those URLs, let us know about it with a noindex or a 404.
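As a rough illustration of the two options mentioned there -- a hypothetical snippet, not tied to any particular CMS -- an empty category-plus-city page could carry a robots noindex tag in its HTML head:

    <!-- In the <head> of the empty listing page -->
    <meta name="robots" content="noindex">

Alternatively, having the server return a 404 or 410 status for those URLs when the result set is empty achieves the same goal even more directly.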
OK, we have ten minutes left, so maybe I'll just open it up for you all -- feel free to jump in with questions. I asked you in the past about an issue regarding links, and I didn't really expect it to be a problem. But lately, even though we've been doing a lot to keep improving the website, we seem to have lost our sitelinks, and rankings have gone the wrong way too. The site has been getting genuine coverage -- we've been picked up by local newspapers, for example -- and it just seems to go in the other direction every time we do something, which is hard to understand. Is losing sitelinks something like a danger signal, or should we just ignore it? Losing sitelinks shouldn't be seen as a danger signal. We try to show sitelinks for queries where we can tell that this site is clearly what the user is looking for -- usually a navigational query for that site -- and in those cases we're more likely to show them. But we don't show sitelinks for all sites, and much less for all queries, and I don't think it's the case that we would suppress sitelinks because of some kind of secret spam classifier. So what happens then -- we had about eight to ten sitelinks, just the small ones, and they're not showing any more, and rankings aren't the same either. I don't know offhand; that might be something I can look at this afternoon. One thing I noticed is that when I'm looking at the cached version of the pages, I'm getting a weird JavaScript error, but I'm not seeing it in the browser, so I'm not sure if there might be a problem there. So the cached version of the page shows some kind of JavaScript error, like a pop-up with an error? OK -- I would double-check with Fetch and Render in Search Console to make sure that you're not generating that kind of error when we actually render the page. Because if we see something like a pop-up with a JavaScript error, then it might be that we say, well, this page looks broken, and we need to reduce its visibility in the search results. I imagine you've double-checked that, but especially if you're seeing it in the cached page, I'd really make sure that you're not accidentally sending something special to Googlebot -- where when you check in your browser it looks OK, but when Googlebot is crawling and rendering the page there's actually some kind of JavaScript error, maybe even a pop-up that blocks the main content, or something that looks like a server-side error. OK. Is there anything else you can tell me, like you were able to in the past? No, I don't have anything specific that I can share there. All right. Thanks, John. I posted my question in the Q&A. We just started doing app indexing -- we added deep links and added the app to Search Console -- and basically as soon as we did that, we started seeing all of the pages showing up as "content mismatch". And when we do Fetch as Google for the app, we get an internal error. We're a video site, and our videos are geo-restricted -- they can't be shown in certain countries -- so I'm wondering if there's a correlation there: when you try to fetch that content from wherever the fetch is running, you run into the licensing restrictions. Yes, that could be part of it. I guess the tricky aspect there is that Fetch as Google for apps is really quite limited at the moment, in the sense that we can't crawl an arbitrary number of pages, because it's complicated to set that up and take those screenshots. So if you're seeing a lot of those errors, that's not necessarily a sign that there's something bad on your side -- it may just be an issue where we're kind of overloaded there.
Yeah, all the links work -- every single one, because they're auto-generated -- so it's definitely an issue with something else. My assumption is that the content mismatch errors and the Fetch as Google internal errors are probably caused by the same thing. I'm mostly concerned because it seems like, at the moment, nothing is going to get into the index. So with content mismatch for apps it's kind of tricky. For those of you who don't follow app indexing that closely: what we require when we do app indexing is that the content on the app page be equivalent to the content on the web page. So if you open the app page on your smartphone, you should see the same content as on the corresponding web page. There are some tricky aspects there, in the sense that some apps have a really nice UI that works really well in the app but doesn't translate that well to a web page, and in a case like that we might see significantly different content on the web page than what's visible in the app -- and that can trigger a mismatch error. If it's really equivalent content, if it's something where you say, well, this should be picked up properly and the algorithms are just a little too strict at the moment, then that's something you can definitely send us some examples of, and we can double-check and see what we can do. Another really common issue we see there is some kind of interstitial -- a login page, or maybe even a Terms of Service page that you first have to click through. Those are also the kinds of things where we'd say, well, we opened this app page, but all we actually see is a Terms of Service page or an interstitial, and that would also trigger a mismatch like that. Yeah, we're definitely not doing that. We're a music video site, and the deep links only go to single video pages, so you get the same video -- you just get the video rather than all the surrounding content. OK. What's the best way to send you details? One thing you can do is post in the forum -- we have a special category for app indexing in the webmaster help forum -- and we do cherry-pick posts from there from time to time, so that's a good place to go. And since you mention videos, that's probably one of the trickiest cases, in the sense that there's not a lot of textual content, and maybe we can't match the text directly because the primary content is a video or an image. So those are probably tricky situations for the team to take a look at, and maybe we can improve that for you. So yeah, some sample URLs would be great. Great, thanks, John.
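For readers who haven't set up app indexing: the web-to-app association being discussed is usually declared with a rel="alternate" link on the web page (or the equivalent sitemap annotation). A minimal sketch for a hypothetical Android app and video page -- the package name and paths are placeholders, not the questioner's actual setup:

    <!-- On https://www.example.com/videos/123 -->
    <link rel="alternate"
          href="android-app://com.example.videos/https/www.example.com/videos/123">

The content served at the deep link then needs to match what that web URL shows, which is exactly what the "content mismatch" check is comparing.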
Can I follow up on apps while we're on the topic? We have an app that requires being logged in to use. We use the same URL structure for the app as for the web to make sure that app indexing works -- would you suggest going about it some other way? I ask also because I believe something like First Click Free is coming for apps, and we're trying to figure out a model. Yeah -- First Click Free, if you can set that up, is a great model to use for that. That way, even if clicking further into the app they have to log in or complete some form or whatever it is, as long as on the first view we can still see the primary content, then that's essentially fine. If instead there's a login wall or an interstitial right at that point, that makes it a lot harder for us. This is something you have to check with Fetch and Render to see what we actually pick up -- sometimes it works, and sometimes we can't look past that and see the final content. John, do you have time to check how my site is doing again, quickly? I need to head off afterwards. If you want, send it to me directly and I'll double-check to see if there's something I can say, or obviously you can post in the forum -- but I think I still have your link from last time, so that will help. All right. So with that, let's take a break here. I'll set up the next Hangouts -- the next ones should be at the normal times again -- and also the next batch after that. So thank you all for joining, thanks for all the questions, and I hope to see you again in the future. Bye, everyone. All right. Welcome everyone to today's-- oh, we have a bit of an echo-- to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google Switzerland. And part of what I do is talk with webmasters and publishers, people who create great websites, and make sure that their input comes to our engineering teams as well. And accordingly, if there's anything from our engineering side that we need to pass on to you, we'll try to do that, too. So as always, if anyone wants to ask a first question, I don't know, maybe one of the new people or the people who haven't been in the Hangouts so much before. Thomas, is there anything specific on your mind? Well, I have a lot of questions. But I've been watching. And first of all, thank you very much for doing these. I think I've watched every single one that you've done, and Matt Cutts prior to that. But I've been watching from afar. This is my first time participating. I've had my website now for a little over 11 years. I started it when I was in the Navy, and it was to find out how we can actually recruit within internet games, which was pretty interesting at the time. But I've kept the site and kept it going. I went to the HTTPS protocol back in August, and I saw an immediate drop in traffic. I was getting tremendous amounts prior to that. And I found out that it was my stupid coding errors in HTTPS, trying to redirect everything to where it was supposed to go for the HTTP to HTTPS move. I think I have everything settled now. A few months back, you mentioned taking a look at sites and being able to do evaluations on those. I don't know if you ever plan on doing that in the future. But if you could take a look at my site and tell me if I actually have all the little wickets where they're supposed to be, that would be good. So I don't know how to have you do that. One thing you could do is, if you have a thread somewhere in the Google community or in the help forum somewhere, if you could send me the link to that thread, I can take a look there and see what we can add from our side there as well. I'll email that to you. OK. Fantastic. Thank you. All right. Hi, John. I was wondering if I could just quickly ask you about-- again, this is a query about a particular site, and maybe valuable information for other webmasters as well. It's a client called spaseekers.com. We went through a process of disavowing a lot of fairly dodgy and manipulative links about six months ago.
And I think the client actually prior to that had spent quite a long time tidying up the backlink profile. And we've still not seen any kind of recovery for the homepage in particular, which seems to have a quite granular penalty for spa breaks and spa days. And this is in Google UK by the way. So we're just wondering, we've been waiting for a Penguin data refresh. But I know you've potentially got a penalty server you could look at. I was just wondering, a, whether you could give us a definitive answer, whether it's penalized or not, and then perhaps elaborate a little bit more on is there anything else we need to do, how long could we be looking at another data refresh happening. That's always kind of tricky. I'm happy to take a look afterwards if we have a bit of time left. But it's usually helpful if you have a link to a forum thread or in a Google+ thread somewhere, where I can just take a look and see what other feedback you've getting, and make sure you're kind of on the right track there. Sometimes there is something we can kind of say about how our algorithms are looking at a site. But oftentimes it's just our algorithms essentially ranking a site as they think it would be relevant there. And improving that ranking is more a factor of just making sure that you're taking that site essentially to the next level-- from quality point of view, from a user experience point of view-- really making sure that it's a top site for that niche. And sometimes that's easier, and sometimes that's pretty tricky. But I'm happy to take a look and we can see afterwards if there's anything specific I can add to those threads. Yeah. So thanks, John. I appreciate that. I'll ping you a link. [ECHO] We've got a bit of an echo. All right. More questions from-- If I can just jump in. I've got a similar issue. I've got a website. We don't know yet. It has been penalized in the past. So if people have any manual reviews or something like that, we in working now over six months and just trying to make, everything good or better or the best. So we're doing every single area, user experience, making our site definitely better than technically [INAUDIBLE] for users [INAUDIBLE]. If I compare in our industry, our website is actually on the top. But I've got this feeling that the more we try, the more our rankings have declined. And I was like, what is the problem there. And I'm just run out of ideas. It sounds like for a lot of the site-specific things, it would be really helpful if you have a public thread somewhere where I can jump in and see-- Yes. I just talked to you in a private message in Google+. And if you could have a look at it, that would be great. I mean, if there's a manual action, if there's something manually that was done wrong, you would see-- That would be in the past. But you didn't have any manual actions, or something like. So the only thing I know is that we had some, I think it's SU attacks, and so many dodgy links linked to us. But I don't know if it's really affected our website. We don't know. All right. John. I can take a quick look afterwards at the [INAUDIBLE]. All right. John. Can I ask a question? Sure. In terms of the rating system, like the stars, when are they being eliminated? Are they in the works? The rating system? The rating stars. They should be there. I mean, they're like the rich snippets, right? Are you talking about that? Yeah. The rich snippets. Yeah. Usually when we figure out that this is something relevant for the page, we'll try to show that. 
At least I don't know of any plans to turn that off completely. Yeah. So from my point of view, that's something that still completely relevant. If you have content that you have user reviews for where you have those reviews online on your site, then hopefully you can mark that up with the review markup. OK. So there's no plans of removing them, yeah? I mean, even if we don't show some types of markup, that's something you don't necessarily need to remove from a website. That doesn't cause any problems. It makes it a little bit easier to recognize what the content is about. And even if we end up not showing, it's still kind of relevant for those pages and might be useful for other search engines. But so it's just the [INAUDIBLE] that are going to be gone. Yeah. The Unicode characters and those kind of special symbols, that's something that we've been looking at where we think it might make more sense to kind of clean that up and just hide those from search results. I see. Thanks. All right. Matt. We have a new visitor. Is there something on your mind, some question that we can help you with? Uh, me? Yes. So I posted this in the Q&A. So we just noticed over the last couple of weeks we've switched everything to HTTPS. I actually asked you about this in a previous office hour about a month ago. Everything was fine for a couple weeks. But now we're seeing something very, very strange, which is we've lost about 80% of videos from our video index. We're a video site, so that's kind of problematic. And Webmaster Tools Search Analytics is saying, we're to like basically 100 or 200 clicks a day from some search and that we're getting like almost nothing happening. But Analytics is showing us that we're getting like two orders of magnitude more traffic from that a day from Google search. So there's something very weird going where something makes it look like we're doing really pretty badly. And the last time that we lost this many videos from the index, we had a catastrophic traffic collapse. Well, we're actually doing better than ever. So we're just not sure where the disconnect between Webmaster Tools Search Analytics and Google Analytics is. So one really common thing that I've noticed a lot of sites do when they move to HTTPS is that they don't look at the HTTPS data directly in-- what is it-- Search Console now, so not Webmaster Tools anymore. So basically, you have to have your HTTPS site verified there as well. And then, you'll see the data for HTTPS. So what you should be seeing is like a drop from the HTTP data and the rise again in the HTTPS data. But you'd have to look at the different sites separately. That makes a lot of sense. So that might make it a little bit easier. I don't know about the videos though. I don't think we show that specifically in Webmaster Tools. We do have information if you have a video sitemap file. We do. Yes. Then, you should see the index counts there for that sitemap file. Do you host the videos yourself? Or do you put them-- They're streamed videos, so we're hosting the HLS playlist files and the [INAUDIBLE] playlist files. And the play pages are our domains. But we use Akamai for this as a CDN to the actual string. And I guess some of that probably also moved to HTTPS, at least the plain pages. But they setup an HTTPS for quite awhile. It's just we moved everything on the site to HTTPS. And we started a gradual migration of sitemaps over to HTTPS. So we're 301ing everything. 
But we're now also kind of generating new sitemaps with the HTTPS links, whereas the old site might still be generating the HTTP links for now. And the idea is to switch those old ones over one at a time as we sort. So we have quite a large problem with latency. So it can take a week or two for a sitemap to come back into the index if it gets pulled, which was the other question I'd like to ask, if that's all right. So we are a video service, so all our content is official record-label provided content that we have basic parental guidance ratings for those. And then, our assumption is that the reason that it takes so long to get into the video links is that a human being has to assess your video to see whether or not it's explicit or not. And we were wondering is there a process by which people who have that kind of industry rating applied to content already can use that as a way of preventing you having to go that route so that would enable you to skip that. And it would mean that we see a reduction in latency and getting it to the index, which might be good for both the [INAUDIBLE] and for us. I don't know whether that's feasible. I don't know. I'd have to check with the video team on that. I'm not sure completely. I know there's a lot of metadata that you can include in the video sitemap file. But I don't know if the ratings information's in there or if we-- Well, [INAUDIBLE] family-friendly or not, which we're setting based on the record company's explicit, not explicit flags. So we're basically setting all the metadata we have. I think all the metadata can be related to the video. So if there's some stuff raised to [INAUDIBLE], we're not doing it. But everything else we're doing. So the other question obviously is so when you get the various rating systems, making sure that your content is rated in a way that it's compatible with the various rating systems. I don't know for, say, [INAUDIBLE]-- Xbox One video stuff-- there's a very complicated multi-tiered rating system you have to go through. But you guys seem to have family-friendly, not family-friendly. I don't know what the current standards there is with the video search index. So I think one thing that would help here is if you could send me an example URL, something where you're setting this correctly already. So I can check with the videotape to see if we can essentially keep that and just use that directly. And maybe also send me an example with the HTTPS, where you're seeing an HTTPS video disappear or disappear since you've moved to HTTPS, so I can double-check with the video team to see if that's really working the way it should be. I'll send you some links. Thank you. Sure. I know with images, it's always a little bit trickier with HTTPS. It shouldn't be that much of a problem. But if you move domains, then it just takes a lot longer for images to refresh. But I don't know for sure how we handle that with videos. By the way, are you guys changing the Search Console URL path? Because it still says Webmaster Tools. No. We're going to use that to trick everyone. I don't know. We have a big list of things that we can still change. And I think, at some point, we'll probably do a public callout for more things where we're still inconsistent with the naming just to make sure that we have everything covered. It's all over. Lots of work. These are always fun things. All right. Let's go through some of the questions that were submitted. And then, we can get back to questions from you all. 
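For reference on the video sitemap metadata that came up there: the family-friendly flag and the other fields live inside a video:video element in the sitemap. A rough sketch with placeholder URLs, not the questioner's actual setup:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
      <url>
        <loc>https://www.example.com/videos/123</loc>
        <video:video>
          <video:thumbnail_loc>https://www.example.com/thumbs/123.jpg</video:thumbnail_loc>
          <video:title>Example video title</video:title>
          <video:description>Short description of the clip.</video:description>
          <video:player_loc>https://www.example.com/player/123</video:player_loc>
          <video:family_friendly>yes</video:family_friendly>
        </video:video>
      </url>
    </urlset>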
If anything comes up in between or towards the end again. "My company signed up as a beta tester the new Webmaster Tools API. We completed the NDA. Where can we find out what's happening next?" So essentially at this point, we're still working on finalizing everything. We're getting people signed up so that they're ready when we're ready to bring that out for testing. But we'll get in touch when things are prepared. I think there's a similar question somewhere else. We can see if we get there later. "Is there any setting or variables that I can use to tell Google that my mobile website I use an alternate desktop tag on my desktop and a canonical on the mobile site. I use a separate subdomain for mobile website." If you set that up correctly, then that's essentially what you need to do. There's nothing special that you need to do in addition to our best practice recommendations here. "How does Google handle linking from a blog post to blog post using keyword anchor tag?" We pick those up as normal links. So it's not something that we would say, this is a blog post therefore the link is more important or less important. Essentially, it's a link. We can follow that. We have our algorithms to figure out how we should treat this specifically. But essentially it's a link like everyone else. "I'm using the G Data Webmaster Tools API to fetch keyword data and nos I can't login." So if you're using the old API to download the CSV files from Webmaster Tools, you need to make sure you're using the right service name. So in our examples, we have I think it's called app ID there, where we can specify an ID that you should use. And this is the one that we're looking for at the moment. And with that one, you can still get into the CSV downloads. And as I mentioned before, we're working on the new API for the search query data. If you're interested in much, send me a quick note with your email address so that I can send you have NDA to get you started on that as soon as we're ready. "Hello, John. Is page rank still used in the Google algorithm? And is this still an important factor? Also, are you able to pass page rank on by using a canonical tag, hreflang, or 301 redirect." So we do still use page rank internally. It's not completely dead. We don't update the toolbar page rank anymore. So if you look at your page rank in the toolbar, then that's not really going to be that relevant. For some aspects of our search algorithms, it's still a relevant factor. It's still something that we use there. So if it were something that we wouldn't use at all or that has a tiny amount of value, then we probably wouldn't be using this anymore. So we try to keep our algorithms a little bit lean. And if there's something that we notice doesn't affect search or doesn't affect search at all anymore, then we'll try to take that out instead of keeping it running. Passing page rank. With the canonical tag, you're kind of forwarding the signals there and you're giving us some information that you'd like to have these pages combined. And we'll try to do that if we could figure out that to really make sense there. Hreflang tag is essentially different, because we want to index these pages separately. It's not that we're forwarding page rank. These are essentially separate pages. And you're just telling us this is German, this is French. And we'll pick out whichever one makes sense. And the 301 redirect, essentially you're forwarding any signal that you have to that page to the next one, so we pass page rank there as well. 
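Going back to the earlier question about a separate mobile subdomain: the standard annotations for that setup are a media-qualified alternate link on the desktop page and a canonical on the mobile page. A minimal sketch with placeholder URLs:

    <!-- On the desktop page, https://www.example.com/page -->
    <link rel="alternate"
          media="only screen and (max-width: 640px)"
          href="https://m.example.com/page">

    <!-- On the mobile page, https://m.example.com/page -->
    <link rel="canonical" href="https://www.example.com/page">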
John, can I ask a follow up question in regards to canonicals and 301s? Sure. We've got two sites in the UK. One is the same as our USA one, lots of general experiences. And we have a second site, which is only extreme stuff-- the skydiving, bungee jumping stuff. And it's in conjunction with a license partner. But we're looking to move all that stuff away from that one to our normal site. So over time, we were going to use either 301s or canonicals between the sites. Is there a saturation point where if you've got, say, 50% of your URLs from one point to another with either a 301 or a canonical Google-- actually, these might essentially be the same thing? Or is it can you do 99% of it and it'll still considered the 1% to bet its own domain? Or is it different in every case? We try to do it on a per URL basis. So if this is something that we can see that a part of the site has moved to a new domain and may be a part of the site stays there, we'll try to keep it like that. It's trickier if like you said like 99% of the site has moved to the new domain. The homepage has moved. The robots text file has moved. The sitemap file has moved. Then, at some point, our algorithms are probably going to say, well, probably the whole site. But if they're really still separate sites, if the home page is still separate, if the robots text file-- The home page will be lost. Then, that's perfectly fine. And I think a rel canonical is a perfect use there, where you essentially have two different markets that you're targeting, some of the products are overlapping. And you just pick one of those two sites essentially that you say this is my primary site for this specific project. And the other site can still list that product, but it has a rel canonical pointing to the primary site. So the user could stay within that other site and see all that content. But for search we would concentrate on the link we specified as the primary version. So that's essentially perfect use case there. Right. OK. Thanks. All right. John, if I can follow up as well on the hreflang tags. Since you say that page rank isn't really forwarded in any way when you use hreflang tags. Is it really worth it to use them when you target English-UK and English-US and you just change the currency from dollars to pounds? Should you have two separate pages for that? Is that too excessive for just changing a symbol basically. It depends. I think it really depends on your website, on your users. If it's something that is really completely the same, then I might just use a rel canonical there, especially if it's the same currency. And that's something where you could, maybe in like Germany and Austria, you could say, well, it's the same currency, same description, everything, I could just use a rel canonical. But especially if it's a different currency or a different price, then what might happen is we pick up the price and show it in the snippet, and then you're searching for your product in the UK and you get the US dollar price in this snippet. And then, that looks kind of weird. Then, the user goes, well, they're selling it to the US. It's probably not for me. So that's kind of the situation where you want to kind of separate these pages and make sure that they're indexed separately, that they really show that they are four different users. But since no page rank is forwarded whatsoever, then you could be ranking very well with the US page in US and very badly with the UK. I mean, other than the metrics that apply domain-wide, for example. Yeah. 
I mean, theoretically that's possible. What happens with the hreflang tag is we swap it out. So if the US page were to rank in the UK in a higher place, then we would swap that out with the UK page. So that's kind of an advantage there. But of course, you're creating separate pages. And they kind of have to rank out by themselves. And if you really have one really strong page and a lot of really mediocre pages, then maybe it makes sense to just say, I'll focus on my really strong version and the snippet might be wrong, the currency might be wrong with the snippet, but it's worth it because I have a really strong page that's more visible than individual pages. But it really kind of depends on your website. And is geo-targeting the recommended, if you had that example with the English-US and English-UK. Should I just do subfolders and geo-target them instead? Sure. Yeah. Would that help? Or would it be-- I mean, geo-targeting usually does affect ranking in [INAUDIBLE]. I mean, that that would help. But of course, if you have a subdirectory for the UK and a subdirectory for the US, then those are separate pages again. So you can say, this page is geo-targeted for the US, UK and Canada, and this page for a list of other countries. You have to kind of do that separately. I mean, for some sites it makes sense to separate it. For other sites, I think it makes sense to combine things. There's always this extra administrative overhead that's involved when you separate things into separate sites. And I don't know. Sometimes it makes sense. Sometimes it doesn't. Trust your judgment. John, can I ask? Sure. I have a question. For example, in Google Webmaster Tools, we have a website that is linking to our home page with 10,000 links. The website is disavowed for more than a year, but Google never removed it from those links. That's the first question. And the second question is we have done most of the sweep of the website. We cleaned, I hope, everything and we would like to go to HTTPS. Will Google recrawl [INAUDIBLE] website faster if we move to HTTPS? Because that's our last resort. If that doesn't work, we have to change the domain. So first off, if you disavow links, they will still be shown in Webmaster Tools. So it's not that you should expect that they disappear there. So maybe things are essentially just working away that they should be, and it's not something you have to worry about. So I think maybe that's the first thing to know. If you do move to HTTPS, then you would redirect probably from the HTTP to the HTTPS versions. And essentially, all of that would be kind of forwarded to the new versions. It wouldn't affect the recrawling of those external links. So it would recrawl the internal pages, but it wouldn't change how the external pages are recrawled in the process. But would Google recrawl our website? Because we have daily index threat now, I think about 200,000 links-- 200,000 pages we have indexed. But Google is still moving down it's about 80,000 errors left now. It's recrawling everything now, but would it make it faster? You don't know. I don't think so. I think it would take about the same time. So we do try to recognize when a site is moving and crawl it a little bit faster to process that site to move faster. So you might have a small effect there, but I don't think it would be a significant effect. But of course, if you want to move to HTTPS, I don't want to stop you. I think that's a great thing to do. Well, we have to do something. I don't know. 
You know my story already, so this is the final resort before we move to a new domain. If this doesn't work, then we have to 301 if I'm not wrong. I mean, if you removed those pages already, then I think you could keep it like that. You could move to HTTPS if you want. I think they're kind of equivalent. I don't think it's something where you need to force those pages to disappear from the internet. They'll drop out automatically when we recrawl them. And that's essentially fine. It's easy when you have 10 pages. But when you have half a million pages, then it takes for Google like a year to do it. That's true. What you can also do is use a second file to let us know that these pages changed. Yeah, we do that. We can do a change done. OK, good. All done. Perfect. John, Can I ask a question before I go? OK. Is that a "yes," or just like, OK, fine? OK. Fine. About two weeks ago, you said that you're working on making Penguin and Panda run a little bit faster. Could you describe what that means by run a bit faster? I don't have anything specific that I can share about that though. So run a bit faster is about as much as you can get. Penguins don't run too fast. Well, I don't know. You could put them in your car. I don't know. Don't read too much into these analogies. And one last shot. Could you go ahead and show us that Google penalty server? No. No? OK. Had to ask. Thanks. I mean, we keep our soccer games to ourselves. And we kind of keep track of what's happening there. [INAUDIBLE]. All right. Well, it would be cool if there's a way to submit a form, because people do ask you all these things. Am I penalized? Do I have a Panda penalty? Do I have a Penguin penalty? And you do sometimes answer people. Wouldn't it be cool to have some kind of forum in Google Webmaster Tools, where people could submit it and maybe get a response? Yeah. I don't know. The "maybe" part kind of bothers me there. I think if we have something like this, we should be fair towards the webmasters and provide that to everyone if we can. To a large extent, I think the difficulty is just that the information that we have on the way our algorithms are working is really, really hard to map to something specific on the webmaster side. So our algorithms are created to kind of adjust the ranking. They're not created to give specific feedback to webmasters. So while sometimes I can look at these internal things and say, well, it looks like you have this problem or it looks like you don't have any problem at all, that's almost I guess an exception, when I can look at a site and kind of see, oh, this is obviously like this or this is obviously something I can tell people. Most of the time, is really just our algorithms are doing their thing, and there's nothing specific I can really point out to you. Obviously, our algorithms think this part here is good, or we've seen lots of good signals here, lots of bad signals here. And we kind of have to even it out and say, well, in ranking that means this for these specific communities or locations or whatever. But that's really, really hard to kind of point back and say, well, as a webmaster, you should mention this word twice more on your page, and then you'll rank one point higher. That's not really the way our algorithms work. But I totally appreciate that the constant pressure to get more information out there for webmasters. OK. Thank you, Have a good day. Sure. Thanks. All right. Let's see. "Is there a long term solution to eliminate a referral span? 
The [INAUDIBLE] option doesn't seem effective, and the number of ghost referrers keeps increasing." So I think this is related to Analytics, where there are all kinds of crazy things happening with the referrers. I know the team is working on something there to make that a little bit easier. But I don't have anything specific to share there. I don't know what they've announced so far or what they're posting in the forums. I think there are some options in Analytics that you can use to clean this up yourself. But I imagine it makes sense to have a more general solution on our side for some of these things. "Testing a site before indexing: I want to check it in Google Search Console, but I can't, because the site has a robots.txt disallow, so I can't use Fetch as Google." Because it obviously says it's blocked. So that's something that's always kind of tricky. We recommend using robots.txt, especially for testing sites and dev sites, so that they don't actually get indexed themselves. What you can do for testing, if there's something specific you want to test, is to remove the robots.txt disallow for a bit and use an X-Robots-Tag HTTP header, which you can add with your htaccess file, for example, if you're on an Apache server, and which essentially tells us not to index any of the content there. So we'd be able to crawl this briefly, we'd see the X-Robots-Tag noindex there, so we wouldn't index any of the content. And that way you can test it out. But this is something where obviously the site will be available to the general public if someone were to find the URL and try it out, or share it on Twitter even. So it's something you have to be cautious with. And it's also something where internally at Google we don't have any secret tools to handle this any better. So if a team on our side says, oh, I'm working on this fantastic new feature to catalog-- I don't know-- all the key presses that we found on the internet where people have taken screenshots of keyboards, there might be millions of pages there. But they can't actually test it until they actually make it live. So you're kind of stuck behind the robots.txt block, I guess, at that point.
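A minimal sketch of that approach for an Apache-hosted test site (assuming mod_headers is available): the header keeps everything out of the index while the robots.txt block is temporarily lifted for testing.

    <IfModule mod_headers.c>
      # Applied site-wide on the dev/test host only
      Header set X-Robots-Tag "noindex, nofollow"
    </IfModule>

Once testing is done, the robots.txt disallow can go back in place.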
"Does Google consider Trustpilot reviews as a brand signal? What's the best way to implement it-- linking to the Trustpilot page, or creating a static testimonial page on your website and copying the reviews?" I don't know how we would use that. So I imagine this is for Google My Business, for the local reviews. I don't know how we would use that there, actually. So I don't have any information about that. When it comes to the rich snippet side of things, we do want reviews markup that is specific to the page, that is specific to the product that you're selling there. So if it's something where you have a review about your website in general or about your company in general, then that's not really what we'd want to see for rich snippets. We'd really want to see something specific to the primary content of that page. "I've looked into Webmaster Tools and was much surprised to see, on the Remove URLs page, some hundreds of removal requests with the status of expired. Less than 1% has a status of removed. This looks like we've done all this work for nothing. Why are you doing this, Google?" So from our side, when it comes to URL removal requests, when we can tell that they've been processed through our indexing system -- when we can see that this page that you removed from your website has been removed from the index anyway -- then we'll expire the URL removal request, because there's no reason to keep it unnecessarily in our system. So if we can tell that you don't actually need this request anymore, we'll just expire it. That's probably what you're seeing there. It's not that you're doing the work for nothing; it's essentially just a sign that our algorithms are pretty fast, and they pick up these changes too, and you don't need to submit them anymore for those specific URLs. So how long does it take for the URL to disappear? I think the URL removal requests remain in place for 90 days. So usually we expect that within those 90 days we'll have recrawled and reprocessed that URL. And if it has a noindex or it has a 404, then we'll remove it from our index. So that's kind of the time frame that we think is relevant there. Sometimes it takes a little bit longer. Usually, it takes a lot less. "Our international website has different regions appearing in sitelinks-- so US pages appearing in French results. Anything we should be doing aside from hreflang that can stop the wrong links from showing?" So hreflang is definitely the first thing that comes to mind there. The other thing that comes to mind is sometimes our algorithms just think this makes sense. So for example, if you have a really strong US site and a somewhat medium-strong French site, sometimes it makes sense that if someone is searching for your brand in France, we show the French pages, but maybe we also show a sitelink from the US site, because that's something that we've noticed people tend to go to and tend to try out. So sometimes our algorithms try to mix that in. But with hreflang, that's really the primary way that you can tell us these pages are equivalent and we should swap them out. If we don't swap them out, if we can't swap them out because we don't have the hreflang markup, then sometimes we will still show them in sitelinks there. So that's something where I'd make sure you have the hreflang markup, and if that's in place and you're still seeing this, I'd think about whether or not it actually makes sense to sometimes show it like this. And if you think this really doesn't make any sense at all, then feel free to send me some examples so that we can share them with the team to figure out how we can improve our algorithms there. "Using hreflang, our global website isn't ranking higher than our country ccTLDs, but in other countries our global website is ranking higher on brand. We set the markup up equally on both. What could be the possible reason? Should I use 'en' on the global site or x-default?" If it's really a global site that's equivalent, on a per-page basis, to the local sites, then obviously you can use hreflang on these pages, and maybe it makes sense to use x-default there. So essentially with x-default, what happens is: if we don't have any match within the hreflang pairs that you provide-- so maybe a German and a French version-- and someone from the US is searching, then we wouldn't know which one of these to use. If you have an x-default, then we use the x-default version for all of the locations and languages that you don't explicitly cover with your hreflang markup. One thing that might be the case here-- it's really hard to say without looking at the example-- is that maybe you're not using the hreflang markup properly. So in particular, we need to make sure that the hreflang markup is on a per-page basis and uses the exact URLs as we have them indexed for those individual countries. So for example, if you have a subdirectory /fr for the French content and a subdirectory /de for the German content, you need to map those pages exactly. You shouldn't use /de/index.html if /de is what's indexed-- I mean, it's the same page, but it's a different URL. So you really need to make sure that those URLs map one to one between those pairs and across all the pairs that you have. So if you have a global home page and a local home page, make sure that the local home page points to the exact URL of the global version, and the global version points back to the exact URL of the local version, so that you have that confirmed from all sides. So my guess would be that maybe you have these URLs set up a little bit incorrectly, and that we're actually showing both of these versions because we can't process the hreflang.
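As an illustration of that per-page, exact-URL mapping -- a hypothetical site with /fr and /de subdirectories plus a global default, all URLs being placeholders -- each of the three pages would carry the same set of annotations in its head:

    <link rel="alternate" hreflang="fr" href="https://www.example.com/fr/">
    <link rel="alternate" hreflang="de" href="https://www.example.com/de/">
    <link rel="alternate" hreflang="x-default" href="https://www.example.com/">

Because the identical block appears on all three URLs, each version confirms the others, which is the cross-linking described above.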
"Sitewide internal links like imprint, terms and conditions: I found different recommendations, from 'no problem' to 'you must use rel nofollow.' What's up here?" So essentially, for internal links we recommend just using normal links. There's no need to use nofollow for internal links. We can crawl those pages directly. We understand that on a lot of websites you have something like a disclaimer, or terms and conditions, or an about-us page that's linked from every page across the website, and that's perfectly fine. That's not something that you artificially need to suppress. I'd use nofollow internally more for parts of the site where you don't necessarily want us to start crawling. So that could be maybe a calendar section, where you say, well, if nothing is explicitly linked, I don't want you to randomly follow my calendar until the year 2099 or however far your calendar goes. Those are the kinds of places where I'd recommend using nofollow. I wouldn't use it for normal content that you're linking to within your website. "We're using the Google Custom Search Engine set to search specific sites. What's the best practice for removing dead links?" I don't actually know how the Custom Search Engine is set up there. In general, I'd just recommend using a 404. That's something that we use for essentially all kinds of crawling: if we see a 404, then we'll try to drop that URL from the search results. There's no need to redirect it to a higher-level page, or to redirect it to any other page. A 404 is essentially the strongest signal you can give us to say, well, this page doesn't exist anymore, you can forget about it. "Some of my page URLs use 301 redirects, but I can still see the old URLs in web search. How long does Google take to figure those 301s out?" It kind of depends. Sometimes we can figure that out fairly quickly; sometimes it takes a long time. The thing to keep in mind here is that we'll follow those redirects right away when we crawl, and we'll try to forward the signals right away when we've picked those up. But what might happen is that we still show those old URLs if you search specifically for them. So if you have an old site and you query for the old site specifically, chances are we'll still show those URLs for a really long time, because we think you're explicitly looking for those old URLs. We might know that they all moved to the new site.
But you are really looking for those old URLs, so we'll show you kind of what you're trying to find. So that's I think the general situation where people get confused in that they think, oh, Google isn't processing my 301s. But actually we are processing them. If you do a normal search, we'll chosen those normal URLs, those new URLs. But if you explicitly search for the old ones, we'll try to pick them up and show them again, even if we know that they don't actually exist there anymore. I have to ask one question. I have noticed that content from our website-- images that are uploaded-- when somebody uses the embed code similar to what YouTube has, those images, if you go search site.name, I find those images indexed on that website. I'm going to send you in chat. Here is the link for example. All those are my website. But they are indexed on somebody else's. Why is that happening? I'd have to take a look at your specific example. But with images-- I'll send it. I mean, with images, it's always harder, because you have, on the one hand, the image landing page URL that's shown in the browser and, on the other hand, the images that are actually embedded in there. So, for example, it could be completely normal for you to say, well, I have my website on mysite.com, but I have a really slow server, so I'll host my images on-- I don't know-- Picasa or Flickr or somewhere else. So you kind of embed the images from a different site. And that's something where we try to index the landing page anyway, even if the image itself we say, well, this actually belongs to another site. And which landing page we show there kind of depends on the query that you would do. So a canonical example is if you're a photographer, and you created your fantastic photographs, and you put them on your website, and on your website you just have the URL image number on top, like DSC2517.jpg. And we know this is actually the original source for this image. But if someone takes image embedded in an article, for example, and they're writing about their fantastic vacation that they have in this one location, then obviously if someone is looking for vacation in that location in image search, we'll show that, because this page has that content even though the image might actually come from someone else's site. So that's sometimes what you would see in there with image search. I don't know if that's the same as what you're seeing in that specific example. But I'll take a look after the Hangout. But that's something to kind of watch out for. And if you are the original source of these images, if you're a photographer and you're creating this content, I'd just make sure that you really kind of have some context for those images on those pages so that we understand that this is a great landing page, not just because it's the original source, but also it has some context, some information about what people might be searching for that image. We had to introduce the algorithm. We had to create similar something like Google is doing. We have this durability that tracks the title, description tags, and everything inside. And we are also integrating TinEye. It's an API for finding out the source of the content. So everything that is not sourced from our website is going to do no index. That sounds good. All right. Let me run through a bunch of the questions that were submitted. And we'll probably have some time for more questions afterwards. "I have a client that has a local business with a My Business page. 
Is there a way to completely remove the Google+ page from the search results?" I think if you remove the Google+ page, you should be able to submit a URL removal request for it, but I'm not completely sure. So I'd double-check with the Google My Business team to see if you can actually remove that page from their side so that it drops out of search, too. "Why is it better to submit one sitemap index file that has multiple sitemaps rather than multiple sitemaps directly?" It doesn't really matter which way you do it. Sometimes it's easier from an administrative point of view to say, I have one URL for my sitemap, and it automatically points to the individual sitemap files. Sometimes it makes sense to just have them listed directly. It's really up to you. No preference on our side. "What happens if we use a page-level meta robots index,follow and use a rel canonical targeting a different page? Which page gets indexed?" Well, first off, if you have the default robots meta tags on your page, so index and follow, we essentially ignore those. They give us no new information, because by default a page is indexed and the links are followed. So we kind of ignore that. If you have a rel canonical on any page, whether it has an index,follow on it or not, we'll essentially try to follow the rel canonical and pick that up and use that. So you don't need to use robots meta tags. You can use index and follow-- the defaults-- for default behavior, and it has no negative effects, but it's not something that you need to do. "Suppose Googlebot sees /0/0/0 in the URL while crawling a website. Is it possible that it might not crawl and index those URLs just because it suspects a spider trap?" No. We would try to crawl it anyway and see what we find there. So I think there are very, very few patterns where we'd say, well, we don't necessarily want to crawl this for web search. Something that looks like a URL path like this seems completely normal to us. What would be a bit different is if we start seeing it recursively. So if we crawl /0 and you have a link to /0/0, and then from there /0/0/0, recursively onwards, at some point our systems will just say, well, this website is broken somehow; we're going to stop crawling this specific recurring pattern. "After launch, do Google's core algorithm updates work using real-time data which is updated by crawl, or do they rely on periodic data inputs and pushes, like certain other well-known algorithms?" We do both. So a lot of our algorithms work in real time: when we crawl something, we'll process it right away. Some of our algorithms work more in batch mode, or kind of separated from the real-time crawling. So we try to do both. "Any differences between app indexing for iOS and Android? Is it correct that iOS won't work as a ranking signal, and that iOS won't cause app install buttons to show up in search?" I don't know. So we're just starting with some beta tests for iOS app indexing, so I expect to see some news over time. But I don't think there's anything specific that we have to share around that just yet. "Is a high ratio of broken internal links a signal of low quality?" No, not necessarily. Sometimes things just get broken. Sometimes a site gets hacked and has a lot of URLs that used to exist and don't work anymore. All of that is essentially normal; we have to live with that. And it's not a sign of the website being lower quality just because there's something technical that's not perfect there.
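For the sitemap index question above, the "one index file" option looks roughly like this -- a minimal sketch with placeholder file names:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>https://www.example.com/sitemap-products.xml</loc>
      </sitemap>
      <sitemap>
        <loc>https://www.example.com/sitemap-blog.xml</loc>
      </sitemap>
    </sitemapindex>

Submitting this single index file or submitting the two child sitemaps individually gives Google the same information; as noted above, it's purely an administrative choice.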
"Everything I found on the hash-bang and URL shortening and push state is one to three years old. So I want to get the newest information." I recently did a Hangout with the AngularJS team maybe three or four weeks ago. So I'd check out that Hangout video. I can link to it from the event entry. And there's a bunch of information there about the current state where we are now. "How to get your website indexed fast?" Really quick way is to use fetch as Google and submit to indexing. That's essentially the fastest way to get your pages in our index. You can also use the sitemap file, which tells us about multiple parts of your site. So those are really fast ways to get indexed. "Saw a massive increase in crawl time and massive drop in pages crawled on April 21 for web pages that have a very slow mobile page load time. Is that due to the algorithm update that happened or more coincidence?" I'd go for more of a coincidence. But essentially if you have a mobile site that's really slow, that seems like something you probably want to improve. Because especially if it's a mobile site, you want to make sure it's as snappy as possible. "We've used single directory to contain our products. We're starting to write it in one directory per product category. Is that easier for Google to understand? What should we watch out for?" I'd set up redirects from the old URLs to the new ones. But essentially it's up to you. That's something where from our point of view, we wouldn't tell you which way you should use that. I'd just make sure that you really have a clean and a strict URL structure so that you don't end up with different parts of the path that are actually irrelevant and that cause more crawling problems than they actually help there. So I would make this kind of a change for Google. You don't need to do that for SEO. If you think it makes sense for your usability or if it's necessary for your CMS, then of course fine. Make this kind of change. But anytime you make URL changes on a website, especially across a board of a website, you're going to see some fluctuation, so some drop in ranking, some drop in visibility in search at least for a certain period of time, which might be a week. It might be up to a month. So this isn't something that you'd want to do on a regular basis. And you wouldn't want to do it just a fluke, because someone said, well, this makes things a lot easier for Google. We can work with both of these variations. All right. We just have five minutes left, so I'll jump to your questions instead. I've got one, John. This is a [INAUDIBLE] issue that I actually ask you I think about a month ago regarding breadcrumbs and rich snippets. So the site was using an icon for the homepage and the breadcrumbs. And the rich snippet testing tool was picking it up fine, but it wasn't showing up in the search results, the rich snippets. none. We are actually still showing to the users the icon but the test is-- let me just give you the link maybe. That could help you. none. And you said, that shouldn't really be there. Google kind of expects it or the text will be visible use as well. And we're still not showing the breadcrumbs rich snippets there. So I've seen that the product prices, for example, also have product markup are showing up fine. So I'm not sure what the issue could be. I can take a look at that with the team afterwards and see if there's something specific I can let you know about. I know I passed that onto the team, but I'm not sure what the result was there. 
But I can take a look at that specific example again. I'll just follow up for next time again. All right. More questions. What's on your mind? Hi, John. Tell me, with regards to user-generated content, is it considered quality content? For example, if you open up [INAUDIBLE] on a page and you get a thousand comments from different people, is this regarded as good-quality content? Because generally from the way you look at it, it looks pretty spammy because it gets quite long on the page. But we've seen in search that these generally rank higher up. It depends. So just because it's user-generated content doesn't make it spammy or lower quality. I think some sites do a fantastic job with user-generated content. For example, Wikipedia is essentially user-generated content. So it's something where just because it's user-generated content doesn't mean it's lower quality content. But on the other hand, if you let the user-generated content go completely wild, and it's just filled with spam or irrelevant comments, or-- I don't know-- crazy abusive stuff that they sometimes gets posted on the internet, that's something where users will probably look at that and say, well, this looks really kind of cheap and not something I'd want to recommend to other people, where also our algorithms might look at that and say, well, overall this page has some good content here but there's this big chunk of content here that's essentially just cruft that we can kind of drop. So that's something where I'd just watch out and make sure that overall your pages are gaining value through the user-generated content, not that they're losing value through the user-generated content. Is there any specific guideline to what counts for the page, like once it reaches 2,000 words we should cut it off and not add [INAUDIBLE]? No. Word count is totally up to you. Whatever you think makes sense for your site, for your users. That's totally up to you. 750. No. And I think with user-generated content, it's important to keep in mind that when we look at these pages, we think that this is the content that you are publishing. So it's not that our algorithms say, well, this is user-generated content, therefore I don't have to account it for or against the webmaster. You're essentially providing the whole page the way that it is. So you're kind of by publishing this user-generated content saying, this is what I want to stand for and what I want my site to stand for. All right. John, but just a question. Guys, I have to head out. So it's been great talking with you. And I hope I'll see you guys again in one of the next Office Hour Hangouts. Cheers. Thank you, John. Thanks all. Thank you very much. Bye. Thanks, John. OK, welcome everyone to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I am a web trends analyst here at Google in Switzerland. And part of what I do is talk with webmasters and publishers like you to make sure that information flows in both directions and that we have the appropriate feedback for our teams internally as well. Before we get started with the questions that were submitted, do any of you have anything that's on your mind? Hello, John, my I present you with a question? Sure. So we have a new website from January. And we're doing much effort to get the best ranking. But for some of our major keywords, we don't rank at all. And we think that there may be a type of penalty or something like that. Can you check that please? I can take a quick look. 
But, essentially, if this is a new website, then these are things that are to be expected. It takes a certain amount of time for everything to settle down and for us to have the appropriate signals that we can use to rank your website for a lot of these terms. So that's something where in the beginning, it's not something that's very, let's say, I don't know how you would say-- very easy to see directly. So it just takes time in the beginning for things to settle down. I am happy to take a quick look, if that helps. But most of the time it's just normal that it takes time for things to settle down and to be picked up properly. Yes, I know that. But I'm asking that because in the past week we've had a cooperation with a company that we think may have used some techniques that are not so good for SEO. So we think that this is bad for us. And maybe if you can check it, it will be good. Yeah, sure. You could put the URL in the comments. And I can take a quick look. Essentially, if they were doing something spammy, and that's something that the webspam team picked up on, then you would see that in Webmaster Tools in the Manual Action section. So that's something you would at least see there. Another thing to keep in mind-- if you're saying this is a new website, and maybe you use an old domain or a domain that previously had other content on it, there might be manual actions associated with that. Or there might just be an older history around that domain that's something that you have to work to clean up first. So, for example, if the old domain name was used for a lot of link spam, then that's something you probably want to set up the disavow file for to make sure that that's really ignored on our side. Thank you very much. Sure. Can I jump in with the next question? Sure. Go for it. OK, I'm calling or asking on behalf of walmart.com. Before I joined the team a couple of years ago, we were hit with our search pages being de-indexed. And we were penalized for that. And we've done everything to try to get rid of our internal search queries from Google's index. We have no-index on our internal search pages. We, up until recently, blocked them with robots.txt. We have our URL parameters in Google Webmaster Tools set to block the search query. But when I take a look at Google Webmasters for our search directory, we still have about 1.2 million pages indexed. We don't see them within Google Search with the site query. But we are getting a lot of Google Search traffic coming in. And Google Webmaster Tools shows a fairly high number of pages still indexed. I was wondering what we could do to get these pages out of Google's index. Is this like a subdirectory or a subdomain on your site? I'd say a subdirectory, /search. OK, so one thing that you could do if you really were certain that all of these pages were things that you don't want indexed, you can use the urgent URL removal tool, which would also work for a subdirectory. So that's the really heavy duty method of getting it removed. In general, what's important is if you have no-index on these pages, we need to be able to see the no-index. So if the page is blocked by robots.txt and has a no-index on there, then we wouldn't be able to see the no-index. It sounds like you removed the robots.txt and now just have the no-index. Is that correct? Correct. OK, so that would essentially be set up properly. That's something that takes a lot of time to get reprocessed.
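As a rough illustration of the point just made-- a no-index can only be picked up if robots.txt allows the page to be fetched-- here is a small sketch; the site, URLs, and the deliberately naive meta-tag check are examples only, not an official tool:

```python
# Rough sketch, not an official tool: for URLs that should drop out of the index,
# confirm that robots.txt actually lets Googlebot fetch them -- if a page is
# blocked, a noindex on it can never be seen.
import urllib.robotparser
import urllib.request

SITE = "https://www.example.com"
URLS_TO_CHECK = [SITE + "/search?q=shoes", SITE + "/search?q=boots"]

robots = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
robots.read()

for url in URLS_TO_CHECK:
    if not robots.can_fetch("Googlebot", url):
        print(url, "-> blocked by robots.txt; a noindex here cannot be picked up")
        continue
    try:
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore").lower()
    except OSError as err:  # network errors, 404s, etc.
        print(url, "-> fetch failed:", err)
        continue
    # Very simplified check; a real crawler would parse the HTML properly.
    has_noindex = "noindex" in html and 'name="robots"' in html
    print(url, "-> crawlable; noindex present:", has_noindex)
```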
So if you're talking about millions of pages than I am guessing the more visible ones we'll re-crawl fairly quickly, maybe within a week or a couple of weeks. And the less visible ones-- the long tail of but these millions-- is probably going to take several months, maybe even half a year or even longer to be re-crawled. And then we'd have to receive the no-index and kind of process that. So that's something where you're probably looking at a time frame of maybe half a year-- three quarters of a year, something like that, for these pages to be re-crawled and dropped out naturally. So if that's too long for you-- if you want to speed that up, then the URL removal tool might be an option there for the subdirectory. It depends on how far you want to go there. OK, when you say URL removal tool, you're talking about the remove URLs, not the URL parameters? Yeah, the remove URLs tool. I have-- sorry go ahead. And you can say-- so there are two options there. One is per URL and the other is for a subdirectory. So this is one you would do for a subdirectory because I think there's a limit to 1,000 URLs. So that wouldn't really help you there. Perfect. That's great. I do have a question about the URL parameters tool. When we have a parameter query within the URL parameters tool, does Google still crawl this and then take it out of the index of the query-- pages with the query? Like, for instance, we have facets that we have in the URL parameters tool that we don't want Google to crawl. But we still see Google crawling these facets-- or Googlebot, sorry. Yeah, so essentially what happens is we use that as a strong signal on our side. We try to crawl a lot less than we normally would there. But we still spot check URLs there. So that's something where depending on the number of URLs, the spot checks might still be fairly visible. So that's one aspect there. The other thing is if these pages are already indexed, and you have no-index on them now, then by using the URL parameter tool, you are limiting how much we crawl for those URLs, which means it'll take a little bit longer for us to re-crawl them and drop them out because they have a no-index on them. OK, perfect thank you so much. Sure. All right, more questions? Mihai, I thought you had something. Yeah, I was just waiting for anyone else to see if they have. OK, one of them is actually regarding the new search analytics in Webmaster Tools. I actually gave you this feedback in one of your posts on Google+. It would be really nice if the filter is applied. So you're looking on one of your websites, and you apply some filters to see CTR and position on a certain date. And when you change to a different website, it would be nice that the filters still apply rather than just reset to their default values. That happens in Google Analytics, for example, when you switch from site to site. And you're just looking like organic traffic that carries over to the new selection-- site selection. So that would be nice if it would be in Webmaster Tools. Yeah, we talked about this internally before we decided on the behavior. And one of the problems was that if you set something up that makes a lot of sense for one site, and you change sites, then it quickly looks like there's no data for the other site because it doesn't really match what they were looking for. So if you picked a query, for example, and you selected a different site to look at that data, than maybe that other site doesn't even have that query at all. 
So that's something where we consciously made that decision. I think there are pros in cons to that. One thing I do when I compare sites is I just change the URL in the URL on top. It's kind of tacky, but it works. Yeah, well at least if the date, for example, or the fact that you're comparing a date to another date would carry over, that is actually what I use it mostly, I guess. Yeah. I don't know if you can do that just for that or you need to do overall. But I-- I'll talk with the team to see-- maybe we can convince them. I've heard the same feedback from other people as well. But, of course, we don't hear from the people who like the current behavior. So it's-- Yeah, yeah, yeah, I guess you're right. One other question-- this is actually a site-specific question. I asked a few Hangouts ago regarding removing the geo-targeting for one of the websites that was already targeted to the United States. And they actually have an international audience. So we set it to unlisted [INAUDIBLE] completely. And we have seen some drops in organic traffic. And I wanted to ask you to take a quick look. When we took the website six months ago, all we found a lot of problems with the backlinks really, really spammy, which did not help at all. But I don't if there's any webspam going on. Or is it just the fact that we did that change in geo-targeting? So if anything pops up-- I don't see anything really obvious. So I am guessing this is just an effect of the geo-targeting changes you made or just normal changes on the website on the web-- so nothing fantastic. Yeah, we've got quite a few categories of things. So I am guessing Google picked some time before processing everything, if there are a lot of changes going on at the same time. Yeah. OK, thanks. I see a question from Tim in the chat as well. "We can contact someone from Google Webmaster Central regarding an interview?" I'd have to take a look at that and see what we can do there. So in general, if you need to contact someone from Google for an interview for a publication or something, the best way to do that is to email press@google.com. And they'll try to route it appropriately. But I can take a look at what you posted there and see what we can do there. We usually have to do all of this through the press team anyway to make sure that they're aware of what's happening. So we'll see. I can't guarantee you anything. Hi there, John. Hi. Good morning, everyone. So on that note, I'm just going to piggyback. So what is going on with this mobile update? How come there's been such a blip, not the impact that was expected? Is this the right forum to ask that question? Sure What do you mean such a blip? Well, this mobilegeddon is no mobilegeddon. It doesn't have the impact, or is it that it's still rolling out? And so we still haven't really seen the impact? When I'm talking impact, I'm talking numbers. We haven't seen much of a variance. We've seen a variance on the 22nd and the 23rd-- but not as big as the impact was expected to be. OK, so you would have liked it to be more visible? I'm wondering why it isn't. Or will it be? What's your thoughts? So it had rolled out completely. So this is essentially the current state that it's in. I think one of the difficulties here is that it's a very broad change. So while it has a fairly big impact across all of the search results, it doesn't mean that in every search result you'll see very visible changes. So that's something that affects a lot of different sites-- a lot of different queries. 
But it's not such that these sites disappear from the search results completely if they are not mobile-friendly. So that's-- I think, on the one hand, that makes a lot of sense for the sites that aren't able to go mobile-friendly just yet-- maybe like small businesses who don't have the time or the money to set up their sites for that. And these are results that are still very relevant in the search results. So we need to keep them in there somehow. The other aspect that we noticed is that a lot of sites really moved forward on going to mobile. So where we expected essentially a little bit of a bigger change because a lot of bigger sites weren't really mobile friendly, a lot of those really big sites did take the time to go mobile-friendly. And with that, they didn't see that much of a big change. Ah, OK, So what was it? I just had one last question here then. What's the most surprising aspect about this update that you're observing? The most surprising aspect-- I don't know. That's hard to say. I was really happy to see a lot of sites jump on and actually go mobile-friendly. I think the mobilegeddon name that was picked there was probably a bit too scary. And it scared a lot of people, maybe a little bit needlessly. But it also encouraged a lot of people to actually take that step and make their sites mobile-friendly. So it's something that had pros and cons there. Do you think the biggest impact will be seen with the small business sector? Or not so much? I mean, they really weren't ready by April 21st. Yeah, we worked a lot with the algorithm to try to make sure that we picked the right sites, the right queries and made sure that the results were still relevant there. So taking the small businesses out that have-- maybe older websites that aren't mobile friendly or that take a lot of work to make mobile-friendly. I don't think that really would have been the right approach there. So it's something where I think we rolled out something that's fairly broad that has a fairly big impact. But it's something where at the moment, we're trying to balance it to make sure that the relevance is still given that we just don't removed everything that's not mobile-friendly. I think over time maybe it'll make sense to dial that up a little bit-- maybe to include other factors as well. There was talk about speed-- maybe interstitials-- those kind of things. And I think over time, this will definitely evolve. But we need to make sure that the relevance of the search results are still given, even if you're on a smartphone, even if the sites are mobile-friendly. Thank you. Sure. All right. Let's grab some questions from the Q&A. "What's the best practice for posting press releases on a company website? We send out press releases via a service then posting the same content to our site. Will this cause duplicate content? Should they posted on the site first then sent out?" So essentially with regards to duplicate content, that's not something I'd worry about. That's something that we generally take care of fairly well on a technical level on our side. So we wouldn't penalize a site for having the same block of text on your website as there is on the press release site. That's something where we understand that sometimes there are technical and practical reasons for that-- that you have these texts duplicated. And we try to de-duplicate them in the search results. 
So if someone is searching for your company and we know that this press release is on a bunch of press release sites and your website, then we'll try to pick one of those sites, show that in search, and kind of keep it at that. So we'll index the different versions. But we'll try to pick one of them to show in search. With regards to posting it first to your site and then to other sites, I generally think that's up to you. That's something that I don't think would have a big impact on how we pick the sites-- the page to show in search. What was it? The other thing-- the main thing to watch out for with duplicate content is really not that you have some content which is duplicated. Sometimes there are like practical reasons for that-- also not that maybe there are some technical reasons why some content is duplicated, but rather just to make sure that primarily your website really has unique and compelling content of its own. So when we talk about something where we're penalizing a site for duplicate content, or manually or algorithmically taking action on that, that's really the case where the whole website is essentially just duplicated content. It takes content from various other sites. Maybe it rewrites them. Maybe it spins that content. Then that's something where we'd say, well, it doesn't really make sense to look at this website at all. We can take this out completely in an extreme case. In another situation like this one, with a few press releases that are duplicated or maybe there is a technical reason why you have things on two domains-- these are all things where we say, well, that can happen. We have to deal with that on our side. The rest of the website is still really important-- really good content. So we'll still show that in search. And we won't demote a site or we won't penalize a site for having some aspects that are duplicated. "We had product schema for getting stars in search results in the past. However, none of the schema shows since we switched to review schema. And it still doesn't work three months later, when it even works for our competitors with no reviews." So I'm not-- I don't quite understand this question with regards to whether or not the stars showed in the search results previously and they don't show now, or whether you basically just switched the markup for them. If they showed previously and they don't show up now, then I would assume that maybe something is wrong with the way you implemented the markup there. If the stars never showed up at all, then I would assume that maybe there's something on our side where we're saying, well, we see this markup on these pages. But we can't really trust it. Maybe the rest of the website isn't really high-quality enough, trustworthy enough, that we say, well, we need to take a step back and consider what we show at all from this website with regards to rich snippets. So that's something where I'd differentiate between those two variations. And maybe there's a mix of that happening as well. But I'd definitely make sure that the markup is really used correctly. Also that you double-check in our help center to see that the markup can be used for rich snippets, because a lot of the schema.org markup is essentially supported on our side, but we don't use it for rich snippets. So if you want to use rich snippets, make sure you're using the type of markup that actually is supported on our side for rich snippets. Hey, John, regarding rich snippets, I actually have an issue from a few weeks ago with breadcrumbs.
So usually for breadcrumbs, we use the markup for the home page as well-- so the home page, then the category page. And in one of our clients' cases, the homepage link wasn't-- there was no anchor text. There was just an icon-- an <i> tag with a class that shows the home icon thingy. Now we use that. We added the markup, and the testing tool showed everything is OK. In Webmaster Tools, we saw for about a week that everything was picked up OK. But it wasn't showing up in the search results. So is that a problem? Should we add-- well, we actually did add, a few days ago, text with a display: none CSS style. Would that be a problem, or is there any other reason that the markup wouldn't have shown up even if everything was resulting in OK? So this is on the homepage? No, on any page, any category or [INAUDIBLE] page. Just the home link was actually an icon-- there was no anchor text or no actual-- OK, like a house symbol or something like that for the home page. OK, I don't know. Good question. The Webmaster Tools and the testing tool show everything is OK. Indeed, there was no anchor text as the title. So the URL showed, but the title-- there was no title row in the testing tool. But there were no errors whatsoever. And in Webmaster Tools it did show them as being picked up and no errors at all. But we didn't see them in the search results. So now we added the brand name also, but with display: none CSS to not ruin the design. So is that a problem? Generally speaking, that should work. I think one thing to watch out for with the breadcrumbs is we recently updated the schema for breadcrumbs. I don't know if you saw that or which breadcrumb markup are you using? The data-vocabulary one, I think. I saw the post regarding the schema.org one. Although on the webmaster help page, it still shows a message that it's not supported yet. I wasn't sure if we should-- Oh, we should update that, yes. So basically there-- if you're using the data-vocabulary markup, then that should work in any case. I think it might be kind of tricky if you're using display: none for the text, because we do expect to use visible content for rich snippets. One thing that might work is to use an Alt text on the image to let us pick up that text like that. That would be an option. I don't know if you had that? It's not an image. It's a :before content. And it's an icon CSS file that uses content in the code. So it's not an actual image file. It might be interesting if you could send me a URL, then I can take a look and pass that on to the team so that they can double-check. But in general, that's something we should be picking up. I don't know how quickly we would be showing the breadcrumbs. But if you waited a couple weeks, then that seems like something we should have been able to pick up there. But if you can send me a URL, then I can double-check with the team on that. Sure. "A client's site I've been working on has no manual penalties against it. But it's as if it's been delisted. I still see lots of indexed URLs in Webmaster Tools, but nothing in terms of Google traffic. What could be going on?" That's really hard to say without being able to look at the site directly. So that's something where I'd recommend going to the help forums and checking in with other webmasters-- other people who have worked on websites so that they can take a look at your website to see what might be happening there. Checking for manual actions is definitely the first thing I would do there.
I'd also check for technical issues-- maybe crawling errors-- maybe a no-index that's on these pages-- maybe wrong canonicals-- those kind of things. These seem like the normal technical things I would watch out for. But at the same time, I'd also make sure that if you're sure that the technical side is great, that also the content-- the quality of the content is given-- that it's not something where we'd say, well, we've crawled and indexed this content. But it's not really something we want to show to users. Therefore, we're not going to rank it very well. But people in the webmaster forums or in the Google+ communities around webmasters-- they have a lot of experience with these things. And they can probably give you some tips on what you could be looking for. "I've noticed some of the largest sites in the world allow Googlebot to index their internal search queries and search results using different tricks, all without a no-index tag. This is against Google's guidelines. What action is Google taking on these sites?" We do take action on these things. And this is something where we try to take action both algorithmically and manually. The part, I guess, to watch out for is that sometimes something that looks like a search results page could actually be a very useful page to users. It could be a category page. It really depends a lot on the content and on what is shown there. So that's something where I wouldn't categorically say everything that has a search in the URL is something that should never be shown. But it's something we try to find the right balance in. The big aspect also involved with search pages, especially if we can crawl them, is that we can easily spend a lot of time crawling these pages. If you can specify any variation of URL parameters there, then we can crawl and index millions and millions of these pages from even just a normal website that isn't that big. And that's a lot of wasted resources on your side and on our side. So that's something where we essentially spend a lot of time looking at the same things over and over again while crawling your pages. But we're not actually picking up anything valuable-- anything that we can show in search. So this is something where, on the one hand, the webmaster guidelines try to point at the quality aspect. But there's also a very strong technical aspect involved with crawling and indexing search pages where everyone is essentially wasting time and resources on something that's not really that important. And maybe the important things on your website are being skipped because we're too busy crawling the search pages, which are things that we ultimately might not show that visibly in search. So there's always this dual aspect there that you have to keep in mind. "Title rewrites: we recently launched a new product version. And on the day of launch, Google decided to revise the title to a three-year-old version. This continues to be an issue with our website-- we would love an ability to tell Google not to rewrite those titles." Yeah. I guess this isn't the first time we've heard this. Our algorithms do try to pick up the right things for these titles and to show them. But especially if you're looking at something that recently changed and we're rewriting it to something that doesn't make that much sense, that'd be something I'd really love to have that feedback directly.
So if you can send me the URLs and the queries where we're doing this, that would be something that the titles team here would love to take a look at and would love to have as an example of something where something changed on the web. And we didn't reflect that quickly enough in the search results. And it's confusing you. So it's not that we're trying to confuse users. We're trying to simplify things a little bit show and show that appropriately. So if you have an example of a title issue like that, you're always welcome to forward that onto us. "Does priority and frequency matter in a sitemap? If not, how can we tell Googlebot to crawl specific pages on daily and high priority?" Priority and change frequency doesn't really play that much of a role with sitemaps anymore. So this is something where we tried various things. But essentially, if you have a sitemap file and you're using it to tell us about the pages that changed and that were updated, it's much better to specify the timestamp directly so that we can look into our internal systems and say, oh, we haven't crawled since this date, therefore we should crawl again. And just crawling daily doesn't make much sense if the content doesn't change. So that's something where we see a lot of sites. They give us this information in a sitemap file. They say it changes daily or weekly. We look in our database of what we see when we crawl. We say, well, this hasn't changed for months or years. Why are they saying it's daily? Clearly, there is a disconnect here we should probably ignore something there. So what I'd really recommend doing there is just using the timestamp and making sure that the timestamp is always up to date so that we can work with that timestamp and say, well, we crawled on this date. You're seeing a change on the other date. Therefore, we should call crawl again. And another thing to keep in mind there that crawling doesn't mean-- that isn't directly related with ranking. So just because we crawl something more regularly doesn't mean it will rank higher. It can happen that something stays static for years. And we've crawled it once maybe half a year ago. And it might be one of the most relevant results for some of the queries out there. So that's something where crawling and ranking isn't really directly related. So I wouldn't artificially try to make Googlebot crawl more if you don't actually have anything there that we can pick up on for indexing. Hreflang question-- "If Google Webmaster Tools reports one error with the sitemap, no return tags, does it mean that the entire file is ignored? Or would the only issue be related to that one region without return tags?" This is done on a per URL basis. So it's not that we ignore the whole sitemap, but rather we see that for this set of pages that you have specified, if that one return tag isn't there, then we kind of have to disregard that pair of pages. But the rest of the site map is definitely still processed. "Is the doorway pages algorithm now in place? If not, is there a projected date for it to take effect?" This is already launched-- the doorway page update. Also will it run in real time like the mobile-friendly? Or will it require periodic updates like Penguin?" I don't know for sure. But from what I've seen, it's something that does run regularly. It's not like every day. But it is run fairly regularly. 
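For the sitemap question above, a minimal sketch of writing the lastmod timestamp from a page's actual modification time-- instead of changefreq and priority-- might look like this; the URLs and backing file paths are invented, and it assumes each URL is backed by a local file whose mtime reflects the last content change:

```python
# Sketch: write <lastmod> from each page's real modification time instead of
# <changefreq>/<priority>. URLs and paths are illustrative only.
import os
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
pages = {
    "https://www.example.com/": "public/index.html",
    "https://www.example.com/about": "public/about.html",
}

urlset = ET.Element("urlset", {"xmlns": NS})
for page_url, path in pages.items():
    entry = ET.SubElement(urlset, "url")
    ET.SubElement(entry, "loc").text = page_url
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    ET.SubElement(entry, "lastmod").text = mtime.strftime("%Y-%m-%dT%H:%M:%SZ")

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```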
Also, since this question suggests that you're still seeing a lot of doorway pages in the search results-- if that's the case, then I'd love to have those examples so that we can pass them on to the team. And we can figure out a way to improve our algorithms there. So if you're still seeing these kinds of problems, we should definitely have those examples so that we can take a look with the team. "My site has an annual conference URL page that we archive at the end of the year. In Google Search, our 2014 annual conference URL appears before our 2015 conference URL. Is there any way to reverse this?" This is something that's always tricky because usually what happens is the older years collect a lot more information. We collect a lot of signals around the previous versions of this conference, or if you have maybe different product models-- maybe the older versions have a lot of press around them-- have a lot of links-- a lot of signals that we've picked up on. So if you're searching for the product name, we might show the older version first because we just think that this is the more relevant one. One trick that I've seen people do, which essentially works here, is that you have one generic URL that you use for your product or for your conference. So you would have, I don't know, yourconference.com. And on your home page, you just have the current version. And when it comes to creating a new version for the next year, you move the current version to an archive-- you say, OK, this is 2014-- to a new URL. And you put the new version on the home page again. So you reuse the main URL over and over again. And then we know that this is actually the most relevant version that we should be showing in search. But there's also this archive version where we have also information somewhere in our archive as well. So that really helps there. John, on that question there, so would it make much difference, or would it not be recommended, to say move the previous content down-- say we're talking about different content in general where someone has a new article on this specific topic. And it's the same topic. Would it make sense to move that down and have a number of them there from some previous years versus just replacing them? You could do that. I think that's something that's less of a search problem than maybe a design problem or philosophical problem almost, like how you want to present your information to the user, if you want to keep the essentially older versions of that information on that same page as well. But essentially, that's kind of up to you. That's similar to how perhaps a blog might do that, where you have the main information on the homepage directly-- some of the older information as well. And it rotates out over time. Maybe the older information just has a snippet and a link to the archived article. That's essentially up to you. All right, thanks. "Where can you report do-follow comment spam links? Is it the same form as the paid links form?" Sure, you can use the paid links form. You can also use the general web spam form. Both of those end up going to the webspam team. And they can take a look at that. I imagine with comment spam, it's a little bit tricky because it's hard to tell who added that comment spam. But if we recognize that there's a general pattern that we need to take action on, then that's something that the webspam team might want to know about. The important part there is also that the webspam team can't promise to take action on all the reports that we get there.
We get a lot of reports. And we need to prioritize those somehow. So the easier you can give us the bigger picture in the webspam report, the more likely we'll be able to take action on the bigger picture. Whereas, if you just give us one comment URL that has a spammy link in it, that's something that we say, well, it doesn't really make sense to take manual action on this one comment spam link that was dropped there. But if you can give us a bigger picture, that's definitely something we can look at and figure out how we can prioritize that appropriately. 'My site has a warning that it may be hacked. But there's no warning in Webmaster Tools. Also Google's cached version of the home page is very out of date. I've tried to submit via Google Webmaster Tools to advance the process, but no change. The SERP looks very odd. What else can we try?" So sometimes what happens there is that a site might get hacked or defaced by hackers in a way that isn't directly visible to the webmaster. So maybe you open a page, and they're cloaking to Googlebot or they are cloaking to our IP addresses. And that's something that you don't directly see in your browser. So I'd recommend going with Fetch as Google in Webmaster Tools to double check that it's really your normal content and not something where there are hidden pharmaceutical links or any kind of other links on these pages. I'd also double check with other search engines, maybe do a site query for the site, and add a bunch of spammy keywords that you think might be involved there and see if anything from your site pops up there where maybe some search engines have noticed that this there is this spammy, hidden content on your site. If you're sure that everything is clean or if you've noticed something, and you've cleaned it up, there is a form that you can submit which I linked to in a Google+ post a while back. I'll try to dig it up and add the link to the event entry. If I forget or if you can't find it there, feel free to contact me on Google+, and I'll send you a link to that form. That's one that goes essentially to our engineering team to work on that feature to double check what actually is happening here-- why we're picking this site up as still being hacked when maybe it hasn't been hacked or maybe you cleaned it up and we just aren't recognizing that cleaner state yet. So this is something where the algorithm essentially runs on its own. But giving feedback to the engineering team sometimes helps us to make sure that we're getting it right that the algorithm is picking up the right issues. "Webmaster Tools needs a tool to handle keywords not associated with the site. Now that it's hard to see what keywords someone used, it's hard to tell why they bounce many times. It's unrelated keywords I would rather reject and not ruin my bounce rates." That's an interesting feedback. Usually we hear the opposite that people want to specify more keywords for the site to rank. But I can totally understand that maybe it makes sense to understand where things are happening-- why people are going to your site if you're tracking things in analytics and want to have cleaner data there. I think in general this isn't something that is likely to happen in Webmaster Tools just because we do try to pick up the most relevant keywords on our page. 
And if we have trouble picking up the relevant keywords on a page or ranking the page for irrelevant terms, then that's something that is more of an issue on our search algorithms and something that I'd say the webmaster would need to manually override. So if you're seeing that a site is ranking for irrelevant keywords, you're welcome to pass that on to us to let us know about that. But I will also take this up with the Webmaster Tools team to see if there's something that they might be able to add there. OK, here's a question about the interview. I'll double check that later. "As we all know, Google's main income is from ads. Many SEOs, online marketers, et cetera believe that Google makes SEO more difficult in order to increase revenue from ads. Is that true? If not, can you explain why?" This is definitely not true. So this is something where we have a very, very strong firewall essentially between the paid side of Google and the organic search side. And that's not something that we would connect where we would say we would make algorithms that make the search results worse so that people go and click on ads more because essentially if we make our service worse, then people are going to go to other search engines. There are other search engines out there. And some of them are really good as well. So it's something where we're not artificially trying to make it more complicated or harder or the search results worse so that people click on ads because in the long run, we need to be able to provide really high quality, fantastic search results if we want people to stay with our service. So that's something where on the one hand, we really have the strong separation between the two sites. On the other hand, we really need to keep that upright so that we can make sure our search results are really as neutral as possible as high-quality as possible and really provide what users want. "What actions does Google take on grey hat SEO technique similar to black hat-- no actions at all. What exactly does Google consider grey hat?" I don't know what you're referring to as grey hat. So from that point of view, I don't know how we would be able to react to that. Essentially when it comes to things that are clearly black hat and clearly white hat, that separation is sometimes very easy to do and sometimes it's very hard to understand what exactly is happening here. Sometimes we have to take a look at the bigger picture for these websites and see what is really happening here. So that's something where we take manual action. On the one hand, we take algorithmic action. We try to make our algorithms robust against any kind of abuse. We try to make our algorithms so that they ignore the accidental abuse that happens. So you've probably all seen this where a website follows advice from someone who read something on an SEO blog 10 years ago. And they stuff hundreds of keywords in the meta-description tag. And when we crawl that, we look at that. And we say, well, clearly someone was trying to follow outdated SEO advice. But instead of penalizing site for doing that, we try to see that and say, OK, well, we recognize this. We'll just ignore it completely. Or when it comes to keyword stuffing, that's essentially similar. We will try to recognize that. We'll say, well, clearly they are trying to abuse our servers and trick us into thinking that this page is more relevant. But maybe they're doing accidentally. Maybe they don't really know what they're doing. Maybe they copy and paste something wrong. 
We'll try to err on the side of caution and just say, well, we'll just ignore this keyword stuffing and focus on the rest of the site as if it were a normal site-- a proper site-- where they're trying to do the right thing. So we don't assume that everyone is out to spam our systems by default. But rather, we try to figure out which parts of these pages are relevant, which parts are maybe artificially exploded that we can ignore, and rank those pages appropriately. "Search analytics can only provide 99 queries per website. Are there any plans to expand it? If not, please take this as a request." I've heard this from other people as well. So your request is certainly heard. I think there might be some tricks where you could get a little bit more information about the queries. For example, if you drill down to maybe filter for a subdirectory or you filter for specific queries or specific sets of pages, then you can get more specific information there. But at the moment, I don't think we'd show more than 1,000 results in the table below. So that's something we'll definitely take on and see what we can do to expand that over time. "How important is it for a site to have a last modified HTTP header?" It's hard to say. It depends on the website. So it's not something that you always need, that every site needs. But especially if it's something where we crawl a site very frequently, then this helps us to save bandwidth and to make it easier for us to crawl your site. So if you have the last modified header, and you respond to those requests appropriately, then essentially we'll just ask has this page been modified since the date when we crawled it. And you say, no. And that's fine for us. Whereas if you don't have this header, we'll ask your server, hey, has this page been modified since the last time we crawled it. And your server will say, I don't know. Here's the contents of the full page. And that's a lot of bandwidth. So if a website is constrained on bandwidth-- if you're limited on bandwidth-- if you have a lot of pages that change frequently or that need to be crawled regularly, then obviously this header makes it a little bit more efficient. But it's not something where I'd say these pages will rank higher because of this header. It's similar to the question at the beginning where the crawling of these pages isn't really related with the ranking of these pages. So if we can pick up the content appropriately and we can index that content, then we'll try to take that into account for rankings. You don't need to force us to re-crawl that content, or you don't need to use this header to optimize the crawling of these pages. Essentially, we have that content. And we can work with that. That said, this is always a good technical best practice. So if you're implementing websites, then that's definitely something to do. I think a lot of the CMS systems out there already implement this by default. So it's not something that you need to manually add yourself. So if you're using WordPress, for example, it does this automatically. There's nothing magical that you need to do on your side to get that implemented. "We want to launch--" Hey, John, can I ask a question? Sure, go for it. You have released a Webmaster Tools beta version-- a new design where we can figure out the [INAUDIBLE] particular design in [INAUDIBLE]. However, it's more of a request of whether you could extend the data range. We usually get to see the data for three months.
So if you could increase that range to over six months, because as you know, organic search-- it takes time. So again, it takes time to build that data to analyze. So three months is a very short time to analyze that sort of data. So is there any possibility that you could do that? That's a good request. We've heard that before. And I know the team is aware of that. But I don't have any plans to announce just yet. So this particular feature is still very new. So I know the team is still working on some of the aspects there. Maybe that's something that they can add at some point. But I don't have anything to announce just yet. Thank you. All right, let me run through the remaining questions very briefly. And then we'll get back to questions from you. "We want to launch in English, French, and German. Is it OK to do that on different domains, not only TLDs, but the domain itself?" Yes, it's certainly possible. You can do hreflang between those versions as well. It doesn't need to be the same domain name. It can be a different domain name-- a different TLD-- whatever you need for the different language or country versions. That's certainly possible. "Best practice on thin category facet pages-- a few listings that occasionally are empty-- no listings. At the moment, we're using no-index and getting lots of soft 404s. Is that something to worry about?" That's fine. Using no-index on those pages is perfect. We'll treat that as a soft 404 because essentially you are saying, well, there's no content here to index. But that's not something that would cause any problems. So, essentially, we're dropping the page from our search results. And from your point of view, it doesn't really matter if we're dropping it because of a soft 404 or because of the no-index. Essentially we're not showing it. And that's what you want to have done. So that's perfect. "I have noticed a big difference between Google Analytics and CloudFlare analytics on my site. Sometimes this difference is over 50%." I don't have any information about either of those services. So specifically Google Analytics versus CloudFlare analytics-- I don't know what they're tracking. So you probably want to check in with the analytics forum on that. "Is there a problem I have to fix? And if not, what's the reason for this huge difference?" Yeah, I think this is something you'd probably want to pick up with the analytics team to see what you're seeing there-- what they're showing in their system as well. "I've been working for this website-- [INAUDIBLE] website meta tags-- sitelinks information is not showing up in search results. I have requested a crawl of the domain, and Google Search shows the old data. Googlebot is not crawling and indexing our domain." I don't know. That sounds like something I'd have to take a look at separately. In general, usually we would still be crawling. So this is something where I'd check in Webmaster Tools that the crawl rate setting is set appropriately-- that we could still crawl your content-- that you're not blocking us with robots.txt or with server errors-- any temporary errors that we're seeing. Those are kind of the technical things I'd watch out for there. "A website is mobile-friendly. But search results say it's not. Here's the website. And the question-- Webmaster Tools forums." I have copied that out so I can take a look at the forum thread afterwards.
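As an illustration of hreflang across completely different domains, with the return tags mentioned earlier included for every pair, a sketch along these lines could generate the sitemap entries; the domains are made up for the example:

```python
# Sketch: hreflang annotations in a sitemap for language versions on completely
# different domains. Every URL lists all alternates, itself included, so each
# pair has its return tag.
import xml.etree.ElementTree as ET

SM_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
XHTML_NS = "http://www.w3.org/1999/xhtml"
ET.register_namespace("", SM_NS)
ET.register_namespace("xhtml", XHTML_NS)

alternates = {
    "en": "https://www.example.com/",
    "fr": "https://www.exemple.fr/",
    "de": "https://www.beispiel.de/",
}

urlset = ET.Element("{%s}urlset" % SM_NS)
for lang, page_url in alternates.items():
    entry = ET.SubElement(urlset, "{%s}url" % SM_NS)
    ET.SubElement(entry, "{%s}loc" % SM_NS).text = page_url
    # Each version points at every version, which keeps the return tags complete.
    for alt_lang, alt_url in alternates.items():
        ET.SubElement(entry, "{%s}link" % XHTML_NS,
                      {"rel": "alternate", "hreflang": alt_lang, "href": alt_url})

ET.ElementTree(urlset).write("sitemap-hreflang.xml", encoding="utf-8", xml_declaration=True)
```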
In general, one of the things we've noticed with regards to a lot of sites when they're mobile-friendly, but we can't pick that up as mobile-friendly with our testing tools, is that maybe some of the content-- the CSS or the JavaScript-- is being blocked by the robots.txt file. And if we can't crawl the CSS, JavaScript, or images, for example, then we can't always tell that it's actually a mobile-friendly website. So that's a common reason there-- something that I'd look out for and make sure that it's not happening. I'll double-check in the forum thread though to see if there's anything specific to add there. All right, we still have a couple minutes left. What's on your mind? Yeah, I'm going to go ahead. Really quick things-- I hope that the breadcrumbs URLs issue can be checked. I hope you got them. Regarding soft 404s, we actually use a method on actually the same site where we count the number of products in a category, and if that is zero, we add the no-index. So for example, the owner of the website wants to create the categories because he knows he'll be adding products in the future. So that's no issue-- if he adds a product, the no-index will be automatically removed. That's no problem? Perfect. OK, and since you already released a new feature of Webmaster Tools, search analytics, what's next in store? What can we expect in the next few months in terms of features? I don't have anything to announce, sorry. I think there might be some interesting things coming up. One of the things we've been working on is a lot with app indexing. So I did a submission form for people who have Android apps and are using app indexing and would like to try some things out. So if you're doing that, I appreciate your submissions. And we'll get you to try some things out there. But at the moment, I don't have anything else that I can really pre-announce for you, sorry. All right, OK, cool. All right, more questions-- everything clear? John, I think what people have mentioned about the doorway pages algorithm is that there was such a small difference, or at least that's the way it appeared. Maybe that's similar to what you were talking about earlier with the mobile-friendly algorithm. But many people are still seeing a lot of doorway page type of content. So is that something where future fine-tuning will be picking that up? Yeah, I think it's something where having feedback from you guys where you're still seeing this happening would be really useful. I know the team that has worked on this. They've tried to make sure that we're really picking up the right things. And maybe that's too limited of a scope. Maybe they need to broaden that out a little bit more. But having really concrete feedback on what you're seeing-- where you're saying, well, this is probably not that great. Google should do a better job of recognizing these are all the same-- maybe folding them together or maybe demoting these sites a little bit more so that the actual content is actually more visible. That's something where your feedback will be really useful. So especially if you're saying people are still seeing these in the search results, that's something we'd love to clean up. And if you are seeing these things in search results, by all means, send me screenshots, send me queries, send me sites, and I can pass that on to the team so that they can discuss this and see where they need to fine-tune things for the future.
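A minimal sketch of the empty-category handling described in this exchange-- switching a category page's robots meta value based on its product count-- might look like the following; the function names are illustrative, not from any particular CMS:

```python
# Sketch: pick the robots meta value for a category page from its current product
# count, so empty categories carry noindex and flip back to index automatically
# once products exist. Function names are illustrative only.
def robots_meta_for_category(product_count: int) -> str:
    if product_count == 0:
        # Empty category: keep it out of the index. Google may report it as a
        # soft 404, which is fine -- the page drops out either way.
        return "noindex, follow"
    return "index, follow"


def render_robots_tag(product_count: int) -> str:
    return '<meta name="robots" content="%s">' % robots_meta_for_category(product_count)


if __name__ == "__main__":
    print(render_robots_tag(0))   # empty category -> noindex
    print(render_robots_tag(12))  # populated category -> indexable
```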
Would user reactions to these types of pages have a significant effect so that if particular doorway pages maybe had a significant amount of content although it was very much a duplicate to a lot of other pages and just the city name was changed et cetera? But maybe that was especially popular in a particular area for particular queries so that it wouldn't necessarily be targeted. Or would that be outside of that realm? I don't know specifically how you mean that. But the feedback that we get from users about these search results especially through the feedback link on the bottom of the search results-- that's something we do to take into account. That's something that the search team gets forwarded as well so that they can work on improving the algorithms there. So from that point of view, that's definitely something that happens. All right, thanks. All right, so with that, we're out of time. I'll set up to the new Hangouts for I think in two weeks. And maybe we'll see you again on one of those. Thanks for dropping by. And I wish you all a great weekend and good start into next week as well. Thank you. Bye everyone.