Comments on: How do you evaluate a database?
https://force11.org/post/how-do-you-evaluate-a-database/
The Future of Research Communications and e-Scholarship
Thu, 26 May 2022 13:41:17 +0000

By: Paola Di Maio
https://force11.org/post/how-do-you-evaluate-a-database/#comment-11023
Sun, 05 May 2013 18:12:25 +0000

Conformance to Reqs and Quality Model?

Thanks a lot for the interesting question, Maryann

I hope not to be mistaken for a waterfall type for mentioning conformance to requirements 🙂 which is one of the classic metrics for quality evaluation in systems engineering.

 

Also, a quality model is set up either upfront or retrospectively; conformance to it is what evaluates quality.

A QM has a multiplicity of dimensions and perspectives, ideally specified by the widest possible stakeholder base!

 

I haven't seen this covered much in the links already provided in the other comments.

best

 

PDM

By: Maryann Martone
https://force11.org/post/how-do-you-evaluate-a-database/#comment-11022
Sun, 05 May 2013 00:39:31 +0000

In reply to Kevin Hawkins.

Thanks, Kevin. These are very useful, and I think we need to establish the same for the sciences. I liked this guideline from the IDHMC document:

"Approximating Equivalencies: Is a digital research project “equivalent” to a book published by a university press, an edited volume, a research article, or something else?  These sorts of questions are often misguided since they are predicated on comparing fundamentally different knowledge artifacts and, perhaps more problematically, consider print publications as the norm and benchmark from which to measure all other work.  Reviewers should be able to assess the significance of the digital work based on a number of factors: the quality and quantity of the research that contributed to the project; the length of time spent and the kind of intellectual investment of the creators and contributors; the range, depth, and forms of the content types and the ways in which this content is presented; and the nature of the authorship and publication process."

By: Kevin Hawkins
https://force11.org/post/how-do-you-evaluate-a-database/#comment-11021
Sat, 04 May 2013 14:38:21 +0000

evaluating digital scholarship

The question of how to evaluate something that isn't a journal article or book has been a preoccupation of digital humanities practitioners for some time. Some of the questions these scholars have attempted to give guidance on relate to the narrower question of how to evaluate a database. Here is a hastily assembled list of links on the topic:

http://www.mla.org/guidelines_evaluation_digital

http://idhmc.tamu.edu/commentpress/digital-scholarship/

http://institutes.nines.org/docs/2011-documents/guidelines-for-promotion-and-tenure-committees-in-judging-digital-work/

http://www.mlajournals.org/toc/prof/2011/1 (see the "Evaluating Digital Scholarship" section of this issue, which is available through open access)

https://library.osu.edu/blogs/digitalscholarship/category/evaluating-digital-scholarship/

By: Melissa Haendel
https://force11.org/post/how-do-you-evaluate-a-database/#comment-11020
Sat, 04 May 2013 14:10:58 +0000

“other” scholarly entities

It is fantastic that our community has been able to push funding agencies to accept entities such as databases as scholarly products of our scientific work, as evidenced by the new NSF biosketch guidelines (yay NSF). However, not only do we require unique IDs for such entities so that we can reference them elsewhere in the scholarly communication cycle and build tooling around them, but we also need to provide guidance on how such things should be evaluated. For all too long, informatics resources have been viewed by review panels as not innovative enough. Yet scientists are as dependent on them as they are on brown paper towels. And, as with brown paper towels, scientists use such resources in innovative ways on a daily basis. So how do we change program officers' and review panels' consideration of such resources, so that they apply different criteria than those used for traditional hypothesis-driven research? In a world where "big data" is a hot topic, our funding agencies need a new mechanism to review the resources, and the people who build them, that enable biologists to take advantage of all of this data. How can such resources or people not be innovative?

How many of you now list things such as ontologies, databases, and code on your CV? Do it now! Perhaps one thing we can do is change how we are evaluated, one CV at a time. But this will only help show our innovation to our evaluators; it doesn't help with the evaluation of the databases themselves: their use, their functioning, or even proposals to build new resources. How is a review panel to know that a new resource is needed, based on an evaluation of the landscape of existing resources? For new resources, shouldn't one have an adoption plan? An integration plan?
