Archivists' Toolkit User Feedback

Saved by Lisa Spiro on November 17, 2008
 

Archivists’ Feedback on Archivists’ Toolkit

 

In order to understand how archivists use Archivists’ Toolkit, I conducted phone interviews with 5 archivists between May and July of 2008. To encourage complete honesty, I promised anonymity to the interviewees.  I tried to capture the interviewees’ remarks as accurately as possible, but I paraphrased and/or condensed some comments.

 

Reasons why archives selected Archivists’ Toolkit (AT)

 

  • “The initial attraction is that we have a lot of tools in place for archival description and collection management, but they’re separate, distinct tools, data silos.  We had the accessioning database separated from the EAD database, along with a separate ILS, a separate database for A/V and photos, etc.  The different databases were not integrated for end users, just from a workflow point of view.  People needed to learn various tools. It was difficult to reuse data because exporting demanded trying to cram it into whatever format the database was using.”

  • “We didn’t have a budget to purchase anything. We probably could have designed our own database, but we couldn’t have designed it to do everything that AT does.  We could have customized things to meet past practice, but also decided to move away from old practices. We don’t want to be too flexible any more.  Also, it was appealing that we could have input into development process as beta testers.”

  • “Our interest in AT is a function of where we’ve been with managing descriptive information and collection information--the information was all over the place.  Some descriptive information resides in the card catalog, library OPAC, and paper finding aids, and some in combinations.  Accessions information until recently was done in paper form only, which made it difficult from a reference standpoint to locate that information quickly.  We built a small database in InMagic ca 1998. Location information is still managed in an Access database.  All of that information was all over the place and still pretty much is.  We had to look in all of those places and had to keep those systems up.  What I liked about AT was it was free, I knew some of the people involved in building it and trusted their judgment, and I felt like they built it with a lot of input from archival community, which has its pros and cons—it slows down development time, but hopefully it meets as many needs of community as possible.  With the latest upgrade, they’ve added new stuff.  Based on AT’s recent survey, they’re pushing at areas that we would like to see added to it.  But we’re still struggling to fill in data for features they already have.  Looking at it and seeing it demoed, it looked easy to use—and it is, particularly if you are familiar with archival terminology and descriptive fields.  I liked the thought that we would be able to link our accessions information to our descriptive information.  I liked that we could output easily to EAD.  Our old system involved a lot of manual work. Now we can quickly spit out EAD or MARC.  I haven’t done much with the print version of output yet but I think they are making improvements to that.  That’s another feature that’s nice on the descriptive end.”

  • “This is the first thing I’ve seen since AIMS (?) in the mid 1990s that links accessions to collections and allows you to search accessions easily for stuff that’s unprocessed.  One of the features in upcoming releases is the user tracking as well.  Once we fully implement AT, we’ll be able to eliminate other resources, especially InMagic, which probably won’t be supported on future operating systems.  I think it will reduce descriptive overhead for archives.”

  • “We had been looking for a management tool that would help us do some of the basic functions of an archive, such as managing our accessions, having a name & subject authority, and having some way of integrating finding aids into one tool. We’ve really been testing AT ever since learning about it.  We’ve implemented parts of it fully, especially the accessions module.  We are looking at or getting to the point of implementing the authority module fully.  We are still hesitating on the resource module, the place where we would import legacy EAD documents & create new EAD documents right in AT, export them, run them through our stylesheet—we’re still testing that.  We’re hesitating because our legacy EAD documents are so diverse and weird.  We have tested importing legacy docs and have seen what they come out like.  AT is doing a lot better now with importing with 1.1, so we’re looking at going ahead and importing them.  We need a stylesheet that works with exported AT finding aids and we haven’t quite got that yet.  One of the things that we are considering is importing MARC records instead of the whole EAD, which would get around the importing issues while still giving us all the benefit of having our resource module linked up to accessions.  There’s a way in AT to link accessions to resources backwards & forwards—it’s so advantageous for us to have those resources in there that maybe a simple MARC record would be plenty for us to get subjects imported.” (See the export-and-stylesheet sketch after this list.)

  • “There weren’t a lot of archival management tools out there—we were looking more at database formats that were more or less home grown.  When I did research in 2005, I researched database structures in EAD and how things worked for people.  I found a lot of different archives that had home grown structures and found out about their limitations—we didn’t adopt any of those.  We did hear about Archon and kind of considered that along with AT, but at the time it didn’t seem to have as many possibilities as AT had for us.  It didn’t at that time have a way of managing accessions—it was more a finding aid creation tool for small archives.  And now it’s expanded a little.  What concerned us a little about Archon is that it didn’t have ongoing grant support.  We saw enough people adopting AT and felt that it had the solidity of ongoing grant support.”

  • “Previously we were using Access.  There was no real way to get EAD out of Access, and we wanted to get finding aids on the Web.  We were pretty pleased with what AT offered, especially EAD export.  We have to abide by the Online Archive of California’s guidelines, so we needed to make some modifications to what AT exports to conform.”

  • “We’re using AT as a collections management system—we’re not using the ability to produce finding aids.  Within AT you have a resource record and a component record (for multi-level descriptions)—a couple of different levels.  We’re using it at the highest level to manage accessions and information about local collections.”

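Several of the archivists quoted above describe the same post-export step: getting EAD out of the Toolkit and running it through a local stylesheet (for example, to meet Online Archive of California guidelines). The sketch below illustrates only that generic transformation step, not anything built into AT itself. It assumes Python with the lxml library, and the file names and stylesheet are hypothetical placeholders for whatever a repository actually uses.

```python
# Minimal sketch of the "export EAD, run it through our stylesheet" step
# described in the quotes above. File names are hypothetical; the XSLT would
# be whatever local or consortial (e.g., OAC) stylesheet a repository uses.
from lxml import etree

def transform_ead(ead_path: str, xslt_path: str, out_path: str) -> None:
    """Apply a local XSLT stylesheet to an EAD finding aid exported from the Toolkit."""
    ead_doc = etree.parse(ead_path)                   # the exported finding aid
    stylesheet = etree.XSLT(etree.parse(xslt_path))   # compile the stylesheet
    result = stylesheet(ead_doc)                      # e.g., EAD -> HTML for the web
    result.write(out_path, pretty_print=True, encoding="utf-8")

if __name__ == "__main__":
    transform_ead("exported_finding_aid.xml", "local_stylesheet.xsl", "finding_aid.html")
```
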
Ease of Use

  • “As someone who has taught an AT workshop twice, I can say that people pick it up pretty quickly.  It does the basic things people need, and it’s easy to use for archivists who know what they need to do with archival description.  Someone who wasn’t trained as an archivist had some problems with it; it’s set up with the assumption that you are an archivist.”

  • “We’ve been using the Resource module selectively.  A few people have used it for finding aids because there have been special circumstances, such as needing to work offsite, and it would have been difficult to set up our institutional macros and template.  It worked out pretty well—we could help them get stuff online by exporting data from the Toolkit.”

  • “Archon & AT offer a good alternative to hand encoding.  We couldn’t have trained [staff with a lack of technical expertise] in a reasonable time frame to produce what they did with the Toolkit.”

  • “That’s a hard question.  It’s not too difficult to use if you just need someone to input data into it.  It’s pretty simple to get students to the point of inputting data.  But there needs to be someone in the department with a more thorough understanding of the program and how things work.  Some things will need to be adjusted after the stuff is input--otherwise you will run into databases that are not very standardized.  The learning curve for all of the features of AT is pretty steep—it took me a month or two to get comfortable with it.  Even now I’m learning new things, such as digital object description or linking internally.  I’ve trained staff and two interns how to input into it. They get information into AT, then I change things.  Much stems from the hierarchical structure.  It’s intuitive but confusing when setting things up.  I have issues where they try to add a file to a box.  In AT it’s not clear if a file is in a box or equal to a box.  I have issues with structure and how AT displays it.  As for training, I did a one-time, two-hour session for staff. Some picked it up quickly and jumped in, others took more time to get comfortable with inputting stuff into AT.”

  • “We have a lot of students working with data entry here.  It’s always a question of how much to give them.  In my mind, the bigger question is how much organization of a collection a student can do.  The students I’ve used are mostly undergrads doing data entry for legacy finding aids.  They’ve been able to pick up on that.  Most of them are fairly computer literate—the bigger issue is not boring them and making sure they pay attention to detail. What level of description can you train students to handle?”

  • “Seeing a tool like AT makes me wish I were starting an archives from scratch.  Getting all of the old data into AT or any system is a challenge.  We’re doing it piecemeal.  Right now our main use is on the resources end, descriptive information, particularly for manuscripts at the collection level so that we can output to EAD and MARC.  We’re still fairly far behind with descriptive information, so that’s our big focus for this year.  The plan is to get our accessions process in place at the beginning of next year.  The trick is mapping the fields in our old database into the new database.  With the new version of AT, they’ve got user-defined fields that will accept some of our oddball information—purchase price, appraisal value, in-house estimate of gift value, etc.  But there are some data issues that are not straightforward and don’t map well, such as hard returns in descriptive fields, which cut the data off so that it doesn’t come across cleanly.  There will be a fair amount of data cleanup to do to get it in there.  Once the legacy data is in there, I don’t foresee any challenges to staff learning to use the system, either to input or search data.  I understand that getting accessions information is a challenge for everyone.  It’s a little bit of a challenge with the descriptive information.  It was stuff that was cataloged by many different people over many different eras using many different standards or none at all.  Cleaning it up will take time—but there are no significant challenges from the system itself.  It seems to do everything we want it to do.” (See the data-cleanup sketch after this list.)

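One interviewee above notes that legacy data does not always map cleanly into AT, for example hard returns in descriptive fields that cut values off on import. As a rough illustration of that kind of cleanup only (it is not an AT feature), here is a minimal Python sketch that flattens embedded line breaks in a CSV export of a legacy database before the data is mapped into the Toolkit; the file names and column layout are hypothetical.

```python
# Minimal sketch of one piece of legacy-data cleanup mentioned above:
# flattening hard returns inside descriptive fields so values are not
# truncated on import. File names and columns are hypothetical.
import csv

def flatten_hard_returns(in_path: str, out_path: str) -> None:
    """Collapse embedded newlines and runs of whitespace in every field to a single space."""
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            writer.writerow({k: " ".join((v or "").split()) for k, v in row.items()})

if __name__ == "__main__":
    flatten_hard_returns("legacy_accessions.csv", "legacy_accessions_clean.csv")
```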
     

Installation and Maintenance

  • “Installation depends on how you set it up. We have the backend—MySQL—set up on a server so various people can connect to it.  Getting it set up in a networked environment took coordination from IT staff.  Once that happened, it was smooth.  We installed it on laptops during testing, and that’s been fairly quick.” (See the connection-check sketch after this list.)

  •  “Installation was pretty easy.  We have a small systems department.  One of our systems persons installed it.  We just upgraded it and that was like installing any piece of software.  I don’t know what would happen now that we have user defined fields—what information would be lost with an upgrade. So far installing it has been a piece of cake.”

  • “We had our systems department do the installation.  On the listserv you see people with issues with MySQL.  Our systems staff didn’t have any problems with the installation.  We haven’t had any problems with the database.  IT staff have moved it around a lot [onto different hardware], and it’s been pretty seamless.”

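The networked setup several interviewees describe (a MySQL backend on a shared server that multiple staff connect to) can be sanity-checked from a workstation before the client is rolled out widely. This is a minimal sketch under stated assumptions rather than an AT-specific procedure: it assumes the mysql-connector-python package, and the hostname, credentials, and database name are hypothetical placeholders, not Toolkit defaults.

```python
# Minimal sketch of verifying that a workstation can reach a shared MySQL
# backend like the one described above. Requires mysql-connector-python;
# the host, credentials, and database name below are hypothetical placeholders.
import mysql.connector

def check_backend(host: str, user: str, password: str, database: str) -> None:
    """Connect to the MySQL server and print its version as a simple sanity check."""
    conn = mysql.connector.connect(host=host, user=user,
                                   password=password, database=database)
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT VERSION()")
        print("Connected; MySQL version:", cursor.fetchone()[0])
    finally:
        conn.close()

if __name__ == "__main__":
    check_backend("archives-db.example.edu", "at_user", "change-me", "at_database")
```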
     

Ease of Customization

  • “There are built-in customization features.  You can change labels of different fields, provide instructions or guidelines, etc.  We have added look-up lists to add specific data and options.”

  • “In order to customize it for local use, you don’t need a programmer, just a set of guidelines to say, ‘on this screen, fill out these fields.’  For CLIR, it will be important for each repository to do the intellectual work up front of giving grad students good guidelines about how to formulate data.  A lot of data is not in controlled vocabularies; there is a lot more loosey-goosey notes stuff.  You don’t want to leave grad students to their own devices to put what they want where.”

     

User Community

  • “There’s a great AT users group listserv that is quite active where people ask and answer questions. We report bugs through the bug reporting system.  We’ve found the developers to be extremely responsive to our concerns all the way through, ever since the beta testing period.  We’re very pleased with that altogether; there’s a really good network of users built up.”

  • “The big thing about AT that will be interesting is that it will be a leap of faith for institutions because it isn’t clear what the sustainability trajectory will be for it.  We’re hoping and betting that it’s not just going to go away, because we’re moving a lot of data into it.”

  • “My experience with the user support has been excellent.  The listserv seems very active and people don’t seem afraid to ask questions.  You get a variety of people from AT responding to it.  They seem to respond quickly, and they seem to all be on the same page.  There aren’t a lot of confusing dialogues.  They seem to be able to handle both complex technical questions and simple questions.  The manual that they created works well for me.  The bug list that they put out is both helpful and confusing.  They have a quickie style of documenting all of these problems.  If you spend a few minutes, you can see that a problem has come up before.  That sort of transparency about what the bugs are and how they are addressing them is a helpful feature.  They have been active about doing presentations at both the national and regional levels.  Without a huge budget, they’ve managed to do a lot of communication with interested users.”

     

Weaknesses of Archivists’ Toolkit

  • Potential problems with upgrading to a new version of AT after making customizations.

  • Archivists’ Toolkit may be challenging for less technical staff to use.  As one archivist commented, “AT is a great project. I evaluated it and didn’t think that it would be as easy for archivists I know with limited technical skills to get it running and use it.  It was a little too technical and required too much IT expertise to get the most mileage out of it.”

  • Lacks a public web interface that would enable the public to search collections.

  • May not work with existing workflows: “We do use the resource module for some stuff here, but our general workflow predates the Toolkit.”

  • “There are still some bugs—it’s still not perfect, so some data may not be saved properly.”

  • “There’s nothing about it that has driven me crazy.  The stuff that drives me crazy is that we have so much catching up to do and so few staff.  AT is a significant improvement as a tool that helps us to get stuff done.  I would like to see it link to user information.  User tracking in AT would be good for part of our collections but it’s not really a holistic solution to knowing what people are using and where we should put our resources.  But we have so much catching up to do that we’re not ready to implement that any time soon anyway.”

  • “In terms of resource description, I like it a good amount.  The complaint I hear from my staff I disagree with.  People say that it’s too clunky, it has too many fields, and you have to separate data into fields—to me, that’s good.  People have gotten used to working in a Word document, without structured data.  AT imposes restrictions, so it’s more of a mindset of getting people used to thinking in a different way about what they’re doing in describing archival material.”

  • “Some more collection management tools would be nice, like doing stuff with processing priorities, ranking research value, current status of processing, level of description.  There’s currently no way to track that within AT.”

  • “It’s hard for multiple people to work on describing one resource at the same time.  They’re working on that in the next release: to merge different resource descriptions.  If you are working on a huge collection with several boxes, it would be good to have people working on the same collection at the same time.”

  • “The exporting of EAD from AT is good; the exporting of MARC is pretty good but not quite as granular as it needs to be.  It would be nice to have something that mapped to Encoded Archival Context for name records.”

  • “There were a few minor buggy issues we had with the first version, particularly with dragging things around, but those seem to be gone now.  There are a few issues with this version where it seems to time out and lose data.  Someone was working on a collection, had the resource window open for half an hour, and lost the data.  The save functionality could be better so that you could save and still remain in the window.  Now we save a lot.”

  • “I’d like the ability to rank collections, track processing priorities, states of collections, preservation, level of arrangement and description.”

  • “In general, I think the connection between the accessioning and resource modules could be a little stronger.”

  • “The problem with the import of legacy EAD is probably our biggest hurdle.”

  • “There are lots of places to put information in and you want to fill in every blank. You have to stop yourself from doing that and make sure that you’re entering what you need to and what’s necessary to create complete, valid documents that are DACS compatible.”

  • “The big challenge with AT is that it leaves a lot of options open to the user.  You have to make choices, and there are lots of different notes available to you.  What a grad student would need is for someone to say this is what we want to do—that is, there should be guidelines locally to say how you work with this.  You wouldn’t want to build the constraints into the software.”

     

Strengths of AT

  • “The accessioning module is better than anything out there or that we could develop on our own.  We implemented the accessioning module first, and it’s pretty much what we’re using now.”

  • “The promise of having a single database for collection management.  You do the accession record, push a button, convert to a resource record, and export as EAD and MARC.  It’s not quite there yet, but it’s moving in that direction.”

  • “I actually like the fact that it is a database where people are forced to separate different data elements—it helps to standardize data and produce finding aids quickly.”

  • “AT makes it quicker to produce finding aids.”

  • “I haven’t found anything better, particularly for the price.  It’s a noble effort by members of our profession to fill a gap.  It seems that they’ve gone about it the right way.  Of the free products out there, they’ve got a good shot at keeping it going, particularly with the amount of implementations of it out there.”

  • “For collection management, I like the ability to produce reports about size of collections, different types, etc.”

  • “We’re very pleased with the accessions module and have been using some of the user-defined fields for our special needs.  For example, we have needed a place where we were able to record material types in each accession, broad material types, whether visual, papers, digital items, etc.  We have used one of the user-defined fields to enter that information.  That will provide for us a way to use AT more as a processing planning tool.  There’s a way to note whether each accession is high, medium, or low priority; we’re entering that information, so we can go through and find all of the high-priority processing accessions in our collections and plan our processing from that.  We’re hopeful that once we get that information entered into AT we can more fully use it as a processing/planning tool. Marking various material types will help people who are in charge of different media types—paper, digital, visual, etc.  We can find all of those collections that belong to us and that way we can use the accessions records.  Our legacy accessions database didn’t have a way to transfer locations directly into the AT locations area, so we have to manually go in and enter all of the locations that we have noted, but once we have done that we can use it as a locations guide, so it’s going to be and is already an excellent tool for us.  When you consider that we had a very rudimentary accessions database in 2004, we’ve come a long way.”

  • “AT would be helpful for processing hidden collections. Right now we are doing cleanup of our accessions database.  As we’re putting locations into locations guides, I’m finding some high-priority ‘hidden’ collections.”

  • “AT has a business plan; there is a plan for ongoing operations, which encourages us.  And for us, personally, we have IT support here which is really good.  Should AT not become sustainable in the future, we have ways in which we can seek IT support to sustain it on an ongoing basis ourselves.  Sustainability is not so much of a concern for us as it might be for smaller archives.  But AT, too, is looking at that and managing it pretty well for an open-source tool.”

  • “We’re finally getting a place to put name and subject authority files and are really glad that we’re finally getting a complete accessions database. All of that information is linked up; names are linked up with accessions and resources.  It’s a great tool.”

  • “It’s going to be a great way to plan processing.  It’s one thing that has made our archives move forward with all of our management for our archives.”

  • “I think AT works really well.  We had been sort of thinking about using it to play around with producing METS digital objects.  We’re in the process of doing mass digitization of archival collections—digitizing stuff at the folder level and linking METS objects to finding aids, and are figuring out how to create METS objects.  The Toolkit is one of the things on the table.  They’re supposed to be working on new functionality.  Now you have to build the whole resource description from the collection to the folder level before you can build a digital object, but you will be able to build a METS object that isn’t connected to anything at the folder level.”

  • “Not many tools are easy for people not trained in XML to use.  If libraries have to train everyone who is working with collections to use XML, it will be challenging to roll out. In an XML editor, you don’t get a nice tree view; you have to do special things to produce that view.  In AT, they are building it so that you have metadata and a visual screen that shows you where you are in the structure.  In workshops, most archivists felt confident at the end of two days in their ability to implement the tool.  We need tools that work more like word processors and visually let you see where information is.”

  • “Ease of creating our resource descriptions.  EAD export has worked fairly well for us. It seems pretty intuitive to use. It’s cut down a lot of work for us in getting things into EAD or MARC.”

  • “Down the road, I’m looking forward to having accessions and collection information interacting more.”
