Monday, November 28, 2011

My Final Blog and Muddy Point.

I think this is the last one I need to reach the required number of posts. It's been a wild ride. Haha...

Anyway, I'd like to write for a bit about the usefulness of the wiki as a library tool. The greatest asset of the wiki is also its greatest downfall: it depends on people to actually update it. If no one uses a wiki, it becomes a worthless resource and can actually cause more problems than would have occurred in its absence. Once a library sets out to use a wiki to record and share information, whether for its own employees or for patrons, everyone can end up assuming that someone else will be the one to update it. This is based on personal experience. Using a wiki without a clear schedule of people responsible for updating it regularly can lead to a situation in which changes to library policy are not tracked or recorded, or at least not on the wiki. Employees then consult an out-of-date document and inadvertently violate library policy.

I find the idea of introducing social tagging into the library environment to be intriguing. I think that it should be used as a method of bolstering the preexisting library cataloging system and associated metadata. Barring any issues of intentional abuse, this could greatly improve search and retrieval ability and provide patrons with higher quality service.
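
To make the tagging idea a bit more concrete, here is a minimal sketch in Python of what I have in mind: patron-supplied tags sitting alongside the official subject headings so that a keyword search can match either. The records, field names, and tags below are invented purely for illustration, not drawn from any real catalog.

```python
# A minimal, hypothetical sketch of social tags supplementing catalog metadata.
# All records, field names, and tags here are made up for illustration.

catalog = [
    {"title": "The Organization of Information",
     "subjects": ["Information organization", "Cataloging"],
     "patron_tags": ["metadata", "lis coursework"]},
    {"title": "Ambient Findability",
     "subjects": ["Information retrieval", "Information architecture"],
     "patron_tags": ["findability", "ux"]},
]

def search(query):
    """Return titles whose official subjects OR patron tags contain the query."""
    q = query.lower()
    hits = []
    for record in catalog:
        searchable = [s.lower() for s in record["subjects"] + record["patron_tags"]]
        if any(q in term for term in searchable):
            hits.append(record["title"])
    return hits

print(search("metadata"))   # matched through a patron tag, not an official heading
print(search("retrieval"))  # matched through an official subject heading
```

The point of the sketch is just that the tags bolster, rather than replace, the existing cataloging: if the patron vocabulary turns out to be abusive or unhelpful, the official headings are still there untouched.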

Muddy Point: I have no muddy point. It's all clear skies this week!

Monday, November 14, 2011

Week 12 Blog Post and Muddy Point

The surface web consists of sites that have been indexed by automated crawler programs (spiders), which seek out pages that are static and linked to from other pages. This is the portion of the Internet that most of us are familiar with, but there is another portion, called the "Deep Web", that is difficult to measure because its content can only be retrieved by entering precisely the right query into a search form. Much of the Internet falls into this category. BrightPlanet developed a search technology capable of running multiple simultaneous queries in an attempt to make deep web content quantifiable and accessible. Its study estimated that the deep web is roughly 400 to 550 times larger than the surface web that most of us are familiar with. Deep web sites are reportedly more frequently used than surface sites but are less well known; I don't understand how that makes sense. Even Google is only searching through about 1 in every 3,000 pages available.

It's amazing to think how much more access and accurate retrieval of information we could have with even 50% efficiency. Considering how convenient and available information seems now, it's exciting to imagine a world in which information could be hundreds of times more retrievable. I have to admit that I never imagined there was still that much untapped potential, even with the current generation of technology. Given the speed at which storage and transmission technologies advance, I wonder how likely it is that search and retrieval will ever come close to bridging the gap of this vast unused potential.
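
Since I found the surface/deep distinction easiest to grasp as a crawling problem, here is a toy sketch in Python (no real network access; all page names and records are invented) of why a link-following indexer never sees content that only comes back in response to a specific query.

```python
# Toy illustration of surface vs. deep web; not a real crawler.
# Page names and database records below are invented for illustration.

# "Surface" pages: each static page lists the pages it links to.
surface_pages = {
    "home.html": ["about.html", "search.html"],
    "about.html": [],
    "search.html": [],  # the search form is static, but its results are not linked anywhere
}

# "Deep" content behind the search form: reachable only by the right query,
# never by following links.
deep_records = {
    "alexandria project": "Geospatial digital library research at UC-Santa Barbara.",
}

def crawl(start):
    """Index every page reachable by following links from the start page."""
    seen, queue = set(), [start]
    while queue:
        page = queue.pop()
        if page not in seen:
            seen.add(page)
            queue.extend(surface_pages.get(page, []))
    return seen

print(crawl("home.html"))                      # the crawler finds only the static pages
print(deep_records.get("alexandria project"))  # visible only with the exact query
```

The crawler happily indexes the three static pages, but nothing in the link graph ever points at the database records, which is, as I understand it, exactly the gap tools like BrightPlanet's are trying to close.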

Muddy Point: No Muddy Point from last week's stuff.

Monday, November 7, 2011

Week 11 Blog and Muddy Point

1993-1994:
The National Science Foundation held planning workshops on digital libraries.

1994:
Digital library research gains its first Federal funding under the Digital Library Initiative (DL1). It consisted of six projects funded jointly by the National Science Foundation, NASA, and the Defense Advanced Research Projects Agency:
  1. University of Michigan: Research into improving secondary education through the use of agent technologies.
  2. Stanford University: Research on digital library interoperability.
  3. University of California-Berkeley: Research on imaging and database technologies.
  4. University of California-Santa Barbara: Alexandria Project to develop Geographical Information Systems.
  5. Carnegie Mellon University: Research into integrated audio, video, and language recognition software.
  6. University of Illinois at Urbana-Champaign: Developing protocols for full-text journals.
1998: The DL2 program began. It was funded jointly by the NSF, NASA, DARPA, the National Library of Medicine, the Library of Congress, the FBI, and the National Endowment for the Humanities.

DL1 and DL2 received $68 million in Federal money between 1994 and 1999.

Google grew out of technologies developed under Stanford's DL1 project.

The creation of digital libraries merged the fields of Library Science and Computer Science into what would eventually become the field we study today: Information Science.

Muddy Point: Dr. He did a great job of clarifying my confusion about XML in lecture last week. No Muddy Point.

Sunday, October 30, 2011

Week 10

XML is a variant of SGML, the Standard Generalized Markup Language. I'm not certain I'm understanding it correctly, but I'll try to explain what I think it does. XML facilitates the transmission of data online. It can use DTDs (document type definitions) to check that a document's structure is free of formatting errors. However, XML does not have to use DTDs; it can apply a kind of default definition of its own to components of a document that have not been labeled. I'm not really sure what that means. Does it just tag those parts as unlabeled, or does it have the ability to discern to some extent what part of a document something was meant to be? I'm very confused about this.

The readings seem to say that XML is not a presentation markup language but a formal language that can break a document down into different elements based on certain logical cues. I don't understand how that's different from HTML coding that breaks a document up into headings and paragraphs. It might be that XML is just far more specific and allows a wider variety of elements. I think I'm going to be relying a lot on Dr. He's lecture to help me understand this. I'm really not seeing the difference between XML and the other languages we've looked at. The readings seem to be saying that XML helps to link digital documents together and to define them in a more detailed manner, but I don't understand how, or what that means. I may end up with quite a few Muddy Points after class this Thursday.
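
In the meantime, here is a tiny sketch, using Python's standard xml.etree.ElementTree module, of what I think the difference is supposed to be: in XML the element names describe the data itself, instead of coming from HTML's fixed set of presentation tags like h1 or p. The record and its field names are made up for illustration.

```python
# A small sketch of XML's self-describing element names, using the Python
# standard library. The record and field names are invented for illustration.
import xml.etree.ElementTree as ET

record_xml = """
<record>
    <title>Week 10 Readings</title>
    <creator>Course Instructor</creator>
    <date>2011-10-30</date>
</record>
"""

record = ET.fromstring(record_xml)

# Because the markup names the parts of the data, software can pull out exactly
# the field it wants instead of guessing from headings and paragraphs.
print(record.find("title").text)    # Week 10 Readings
print(record.find("creator").text)  # Course Instructor
```

If I'm reading the assigned material right, that self-describing structure is what lets XML documents be linked and defined in more detail than plain HTML, but I'll wait for the lecture to confirm.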

Muddy Point: No muddy point from last week. Dr. He did a great job of explaining CSS. Hopefully, XML will make much more sense after this coming week's lecture as well.