High-Level Take-Aways from Matt and Oscar’s Talk


Matt Wood and Oscar Salazar (Application Security Group at HP)

They will send their presentation; look for the link here. It has details on all of these examples. There was a lot of good discussion, so please add comments to this post.

They have downloadable tools to help check for website vulnerabilities. A lot can be done with static analysis.

  1. The first thing to do is to look at the web page’s source code. For example, if there is a “load compressed file” command, what else can I get it to load? A configuration file? Executable code? This is useful information for a hacker: you can put code into a web page and execute it.
  2. The Web Hacker Rule: you can modify everything you send to a web server, so explore and experiment to find out what you can do.
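The rule above can be sketched with a hypothetical example (the form fields and prices are made up): a hidden form field that client-side code never lets the user edit can simply be rewritten before the request is sent.

```python
from urllib.parse import urlencode, parse_qs

# A checkout form as rendered by a hypothetical shop: the "price" field is
# hidden, and client-side JavaScript never lets the user change it.
form_as_rendered = {"item": "laptop", "qty": "1", "price": "999.00"}

# The Web Hacker Rule: everything sent to the server is under the client's
# control, so an attacker can rebuild the POST body with any values at all.
tampered = dict(form_as_rendered, price="0.01")
body = urlencode(tampered)

print(body)  # item=laptop&qty=1&price=0.01
assert parse_qs(body)["price"] == ["0.01"]
```

The server therefore has to recompute the price and re-validate every field itself; nothing that arrives in the request can be trusted.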
  3. Google Hacking Vectors
    1. Hacking (e.g., the XSS demo; SQL injection)
    2. Misplaced Trust
    3. Resource Enumeration
    4. Session Hijacking
    5. Parameter Manipulation
  4. Client-side validation is ridiculous: it is easily defeated.
  5. Clever evasive maneuvers without an underlying model are very dangerous (compare this to what we talk about in operating system security: ad hoc security through obscurity does not work). You can’t just assume that because you’ve done something complicated, the hacker will not figure it out. For example, a filter that strips “script” in a single pass is defeated by “scrscriptipt”: removing the inner “script” leaves “script” behind.
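The “scrscriptipt” trick in point 5 can be demonstrated in a few lines (a minimal sketch; the single-pass filter stands in for whatever ad hoc blacklist a site might use):

```python
payload = "<scrscriptipt>alert(1)</scrscriptipt>"

# A naive, single-pass filter: strip the word "script" and assume it's gone.
filtered = payload.replace("script", "")
print(filtered)  # <script>alert(1)</script> -- removing the inner "script"
                 # reassembles the very tag the filter tried to block.

# Looping to a fixed point only patches this one trick; the lesson is that
# ad hoc blacklisting is fragile, and output encoding is the principled fix.
import html
safe = html.escape(payload)
assert "<" not in safe
```

This is exactly the “security through obscurity” failure mode: the filter encodes no model of what makes output safe, only a guess about what attackers will type.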

9 Responses to “High-Level Take-Aways from Matt and Oscar’s Talk”

  1. Andrew Mishoe Says:

    I wondered if Google could be held liable at all for returning search results such as .xls files of usernames and passwords. Obviously the company, or whoever, should be more responsible and not put it on the web, but does it raise any security concerns when Google bots surface results like this?

  2. Michael Qin Says:

    I hope Google would not be liable in such a situation. By extension, we would be holding Google responsible for anything that might be the result of a search query, but Google probably can’t filter out all the secrets people decide to post on the web.

    This was a really awesome presentation, and I hope we’ll have the slides soon. It was great to see some topics we discussed in class used as actual exploits.

  3. Andrew Mishoe Says:

    Well, we all know Google can filter 🙂 just ask China.

    I’m just picturing the search bots returning the content of every file that is posted on your domain. Obviously that isn’t the case, but I just wonder how this stuff gets on the web in the first place.

  4. Michael Qin Says:

    Haha, that’s a good point. When I was in China, I ran a Google search on Tiananmen Square for fun, and it was very different from what the searches here turned up. But this represents a blanket filter against undesirable results, and that might be easier to implement and maintain.
    I think an interesting scenario could be if a password file is stolen and posted by unauthorized parties. Then would Google be obligated not to return the result? I would still think that it would be difficult to filter out individual instances like this.

  5. Kenny Franks Says:

    I know that, like most search engines, Google respects the rules of robots.txt. So, if a legitimate site is set up properly, the search engines will not index private information. Of course, if you make a case with the powers that be at Google, you can have any URL or cache removed, provided you have the proper authority (whether it be a webmaster or a law enforcement official). This chart, published by Google, gives some information on government requests for removal of content by country: http://www.google.com/governmentrequests/
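    As a sketch, a robots.txt that asks crawlers to skip a directory looks like this (the path is made up). Note that robots.txt is purely advisory and is itself publicly readable, so it tells attackers where to look and is no substitute for real access control:

```
# Hypothetical robots.txt at the site root
User-agent: *
Disallow: /private/
```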
    Pretty interesting to see which countries have the highest number of requests.

  6. Abhishek Chhikara Says:

    I think it’s the responsibility of the people who post such information. Even if it’s not Google, there are other search engines that can be used to do the same, so there is no single search engine on which the liability could be placed.

  7. Antonio Says:

    I don’t think Google should be held liable for information people can find through them. I agree with Abhishek: the companies and individuals who upload the information should be held liable.

  8. Rohit Sinha Says:

    Well, this actually kind of brings up the whole thing about torrent search engines as well. There have been many popular torrent search engines that just index what torrents are available on what website. Even though some of the content available on those websites is completely legal to redistribute through any method, they have had a lot of legal issues. Google can also be used as a torrent search engine: if you search for anything you would like to acquire and append the word “torrent,” you will be returned tons of websites providing torrents for whatever you are looking for. So in what sense is Google really any different from those other websites?
