ScraperWiki Twitter Query


Please forgive me, I have limited knowledge of ScraperWiki and Twitter mining.

I have the following code to scrape Twitter data. However, I want to edit the code so that it gives me results geotagged in New York on a particular date (let's say, April 1, 2013). Does anyone know how I should do this?

###############################################################################
# Twitter scraper for the term 'hello'.
###############################################################################

import scraperwiki
import simplejson

# retrieve page
base_url = 'http://search.twitter.com/search.json?q='
q = 'hello'
options = '&rpp=10&page='
page = 1

while 1:
    try:
        url = base_url + q + options + str(page)
        html = scraperwiki.scrape(url)
        #print html
        soup = simplejson.loads(html)
        for result in soup['results']:
            data = {}
            data['id'] = result['id']
            data['text'] = result['text']
            data['from_user'] = result['from_user']
            data['created_at'] = result['created_at']
            # save records to the datastore
            scraperwiki.datastore.save(["id"], data)
        page = page + 1
    except:
        print str(page) + ' pages scraped'
        break

In addition to q, you can use the query parameters geocode and until; see this page of the Twitter Search API documentation. Please note that you cannot use the Search API to find tweets older than about a week.

Besides, it's easier to use urllib.urlencode() to construct the query, for example:

import urllib

query_dict = {'q': 'search term(s)',
              'geocode': '37.781157,-122.398720,25mi',
              'until': '2013-05-10'}
query = urllib.urlencode(query_dict)
response = urllib.urlopen(basic_url + query).read()  # basic_url = the search endpoint up to and including '?'
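One advantage of urlencode() is that it escapes the commas in the geocode value and any spaces in the search terms for you, so you don't have to build the &-separated query string by hand.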

Update: please see this example scraper, which you can copy and adapt to your needs.
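In the same spirit, here is a rough sketch of how the original loop could be combined with those parameters for the New York / April 1, 2013 case. It assumes the same ScraperWiki environment and the old search.twitter.com endpoint used in the question; the New York coordinates, the 15-mile radius, and the use of the since: operator to set the lower date bound are illustrative assumptions, not something fixed by the original post.

###############################################################################
# Sketch: geotagged tweets around New York City on April 1, 2013
# (assumes the same ScraperWiki setup and old Search API as the question).
###############################################################################

import urllib

import scraperwiki
import simplejson

base_url = 'http://search.twitter.com/search.json?'

# Search term plus a date window; 'until' is the exclusive upper bound and
# the since: operator (assumed) narrows the start. Remember the Search API
# only reaches back about a week, so older dates return nothing.
query_dict = {
    'q': 'hello since:2013-04-01',
    'geocode': '40.7127,-74.0059,15mi',  # approximate NYC centre, 15 mile radius (assumed values)
    'until': '2013-04-02',
    'rpp': 10,
}

page = 1
while True:
    try:
        query_dict['page'] = page
        url = base_url + urllib.urlencode(query_dict)
        html = scraperwiki.scrape(url)
        soup = simplejson.loads(html)
        if not soup.get('results'):
            break
        for result in soup['results']:
            data = {}
            data['id'] = result['id']
            data['text'] = result['text']
            data['from_user'] = result['from_user']
            data['created_at'] = result['created_at']
            # save records to the datastore, keyed on the tweet id
            scraperwiki.datastore.save(["id"], data)
        page = page + 1
    except:
        print str(page) + ' pages scraped'
        break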

