Write a Ruby program to scrape and process some interesting data from a publicly accessible OSU site. Some possibilities are:
For example, your program might search the jobsatosu site for all job postings related to HTML or CSS. You could even use a cron job (see cron and crontab) to run the script, so that every morning it queries for particular job postings, creates a digest, and emails you the results.
As another example, you could automate the process of figuring out how many credits each course in a given list of courses is worth. For instance, given a text file with the course numbers of all the GEC courses that count in the "Social Science" category, your tool would return the credit load of each individual course!
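The credit-lookup tool above might be sketched as follows. The base URL, the page layout, and the assumption that a course page states its credits as "Credit Hours: N" are all hypothetical placeholders, not the real OSU catalog format; you would adapt the parsing to whatever source you choose.

```ruby
require "net/http"
require "uri"

# Extract the first "Credit Hours: N" value from a page body, or nil
# if no such phrase appears. (The phrasing is an assumption.)
def parse_credit_hours(html)
  m = html.match(/Credit\s+Hours?:\s*(\d+(?:\.\d+)?)/i)
  m && m[1].to_f
end

# For each course number listed in the file (one per line), fetch its
# hypothetical catalog page and print the credit load.
def report_credits(course_file, base_url)
  File.readlines(course_file, chomp: true).each do |course|
    body = Net::HTTP.get(URI("#{base_url}/#{course}"))
    credits = parse_credit_hours(body)
    puts "#{course}: #{credits || 'not found'}"
    sleep 1 # be polite between successive GETs
  end
end
```

The parsing is deliberately separated from the fetching so it can be tested on saved HTML without hitting the server.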
You can use whatever source for data you like, provided it is an OSU source and is publicly visible. If you are doing many successive queries, you must space them out in time (e.g., using sleep between successive GETs) so as not to overload the server. You can do whatever scraping and processing you like, provided the result is interesting and doesn't violate any department or university policies.
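One way to enforce the spacing requirement is a small wrapper that pauses between successive GETs. This is only a sketch; the fetcher is injectable so the pacing logic can be exercised without touching the network.

```ruby
require "net/http"
require "uri"

# Fetch each URL in turn, sleeping `delay` seconds between successive
# requests (no wait before the first). Returns the response bodies.
def paced_get(urls, delay: 2, fetcher: ->(u) { Net::HTTP.get(URI(u)) })
  urls.each_with_index.map do |url, i|
    sleep delay if i.positive? # space out successive GETs
    fetcher.call(url)
  end
end
```

In your real script you would call `paced_get(urls)` and let the default fetcher issue the HTTP requests; a delay of a second or two per request is usually enough courtesy for a small batch.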
Note: A gem such as Mechanize can be very helpful for issuing HTTP requests, and a gem such as Nokogiri can help navigate the body of the response to find information of interest. Together, these two gems can greatly simplify your web scraping task. For instructions on how to make use of gems, see this reference.