DAS1.6E
Details of existing and proposed extensions to the DAS 1.6 specification.
Existing Extensions
These extensions have undergone development and have solid implementations.
Alignment
This extension is a new command, carried forward from its equivalent extension in the DAS 1.53E extended specification with some modifications. An official specification for the extension under DAS 1.6 was proposed, but was removed in draft 7. Instead a redesign of the query mechanism is proposed in order to better accommodate genomic alignments. For posterity, draft 6 contains the last version before the specification was removed (see the DAS 1.6 page for details of drafts). Discussion should be directed to the mailing list (see the Community Portal page for details).
Interaction
The DASMI extension expands DAS to apply to molecular interactions. It is part of the DAS 1.53E specification (i.e. is an existing extension to the 1.53 specification). It is detailed here.
DAS writeback
June 10, 2009
This is a working document and a proposal for an extension to the DAS 1.6 specification to support writeback capabilities. A writeback server should provide, at a minimum, the basic reading and writing operations known in the database world as CRUD (Create, Read, Update and Delete). The reading component is already covered by the DAS protocol; indeed, all of the current commands in the specification can be regarded as read commands for different kinds of information.
One or more writeback servers can be associated with a coordinate system; however, a writeback source can handle only one coordinate system, which ensures that each feature is identified with the correct segment of the right species.
DAS partially follows the concepts of RESTful services, which is why most of this proposal is inspired by the RESTful notion of a Uniform Interface: all resources should be manipulated through a predefined set of operations. DAS uses HTTP (see RFC 2616) as its communication protocol, so the logical Uniform Interface for DAS is to use the HTTP methods PUT, GET, POST and DELETE for the four CRUD operations. The proposed details of using these methods with DAS are explained below, after a description of the DAS writeback document.
DAS Writeback Document
The document used for all the methods is an XML-formatted "DASGFF" document. All the information needed to create or edit an annotation can be supplied in this format, which means that implementations of this extension depend on the DAS version implemented.
All the elements of this format are explained in the DAS 1.6 specification.
An example of the document is below; as explained later, the same format can be used for both the input and the output of the HTTP methods.
Depending on the implementation, extra information may be required as metadata of the operation. In those cases the NOTE element should be used with a KEY=VALUE notation. In the example below this notation is used to represent the OpenID of the user who sent or is sending this feature to the server.
<?xml version="1.0" standalone='no'?>
<!DOCTYPE DASGFF SYSTEM "http://www.biodas.org/dtd/dasgff.dtd">
<DASGFF>
  <GFF version="1.0" href="http://www.ebi.ac.uk/das-srv/uniprot/das/uniprot/features?segment=P05067">
    <SEGMENT id="P05067" start="1" stop="770" version="7dd43312cd29a262acdc0517230bc5ca">
      <FEATURE id="UNIPROTKB_P05067_KEYWORD_Disease" label="Disease mutation">
        <TYPE id="BS:01019" category="inferred by curator (ECO:0000001)">disease</TYPE>
        <METHOD id="UniProt">UniProt</METHOD>
        <START>10</START>
        <END>40</END>
        <SCORE>0.0</SCORE>
        <ORIENTATION>0</ORIENTATION>
        <PHASE>-</PHASE>
        <LINK href="http://www.uniprot.org/uniprot/P05067">http://www.uniprot.org/uniprot/P05067</LINK>
        <NOTE>Adding a new feature!</NOTE>
        <NOTE>USER=http://user.myopenid.com</NOTE>
      </FEATURE>
    </SEGMENT>
  </GFF>
</DASGFF>
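For illustration only, here is a minimal sketch (in Python, using only the standard library) of how a server might separate free-text notes from KEY=VALUE metadata in a submitted document; the function name and the simple "first '=' splits key from value" convention are assumptions, not part of the proposal:

import xml.etree.ElementTree as ET

def note_metadata(dasgff_xml):
    """Split NOTE elements into free-text notes and KEY=VALUE metadata."""
    root = ET.fromstring(dasgff_xml)
    notes, meta = [], {}
    for note in root.iter("NOTE"):
        text = (note.text or "").strip()
        if "=" in text:
            # Simplification: assumes free-text notes never contain '='.
            key, _, value = text.partition("=")
            meta[key] = value      # e.g. meta["USER"] = "http://user.myopenid.com"
        else:
            notes.append(text)     # e.g. "Adding a new feature!"
    return notes, meta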
HTTP METHODS (DAS writeback methods)
POST
The HTTP method POST should be used to create a feature on the server. The writeback document should be sent to the server in a POST variable called content. The server should create an identifier for the feature: the URI formed by concatenating the URL of the writeback server with a number that is unique within the server (e.g. a sequential number). The response should use the HTTP status codes of the DAS specification (i.e. 200 OK, 500 Server Error, etc.). The content of the response should be a DASGFF document with the information as created in the database, so if everything is correct the response will be almost identical to the input document. The only difference will be the metainformation added by the server as notes, for example:
<NOTE>USER=http://user.myopenid.com</NOTE>
<NOTE>VERSION=1</NOTE>
<NOTE>DATE=2009-06-10 18:11:30.672644</NOTE>
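As a rough client-side sketch (assuming Python with the third-party requests library and a hypothetical writeback URL; the proposal does not prescribe any client tooling):

import requests

WRITEBACK = "http://writeback.example.org/das/writeback"  # hypothetical server

with open("new_feature.xml") as f:   # a DASGFF writeback document
    document = f.read()

# The document travels in a single POST variable named "content".
response = requests.post(WRITEBACK, data={"content": document})
print(response.status_code)  # e.g. 200 OK, 500 Server Error
print(response.text)         # DASGFF echo, with server-added NOTE metadata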
PUT
The behavior of the PUT method is very similar to that described for POST. However, in this case the writeback document is the content of the request itself (i.e. it is not embedded in any parameter). An important difference is the handling of the id, which is the value of the id attribute of the FEATURE element if that value is a valid URI; otherwise it is the URI formed by concatenating the base URL in the href attribute with the value of the id attribute of the FEATURE element. For example, for the same writeback document as above, the id in the FEATURE element would be:
<FEATURE id="http://www.ebi.ac.uk/das-srv/uniprot/das/uniprot/features/UNIPROTKB_P05067_KEYWORD_Disease" label="Disease mutation">
The response should follow the same rules as the POST method.
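The corresponding sketch for PUT differs only in how the document is transmitted: it is the raw request body rather than a form variable (same assumptions as the POST sketch above):

import requests

WRITEBACK = "http://writeback.example.org/das/writeback"  # hypothetical server

with open("edited_feature.xml") as f:
    document = f.read()

# For PUT, the DASGFF document is the request body itself.
response = requests.put(WRITEBACK, data=document,
                        headers={"Content-Type": "text/xml"})
print(response.status_code)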
DELETE
The DELETE method does not require a writeback document as input, because to delete a feature it is only necessary to identify it. A feature is identified by its own id plus the id of the segment from which it will be deleted. The OpenID is also required for this transaction. The format of the HTTP request should be:
DELETE [server]?featureid=[featureid]&user=[openid]&segmentid=[segmentid]
A successful response SHOULD be 200 (OK) if the response includes an entity describing the status, 202 (Accepted) if the action has not yet been enacted, or 204 (No Content) if the action has been enacted but the response does not include an entity.
The content of the response should be in the same DASGFF format, but the label attribute of the FEATURE element should be DELETED, as in:
<FEATURE id="http://writeback/92" label="DELETED">
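A sketch of the DELETE request, passing the three identifying values as query parameters in the format given above (the server URL and IDs are illustrative):

import requests

WRITEBACK = "http://writeback.example.org/das/writeback"  # hypothetical server

response = requests.delete(WRITEBACK, params={
    "featureid": "http://writeback/92",
    "user": "http://user.myopenid.com",
    "segmentid": "P05067",
})
# Expected: 200 with a status entity, 202 if pending, or 204 with no body.
print(response.status_code)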
GET
All of the read commands in the DAS specification apply here; however, deleted features should also be included (in the same format as explained above), and it is the client that decides how to use this information. There is one extra command that a writeback source should implement:
Historical Command
Retrieve all the versions that the writeback holds for a specific feature.
Scope: Writeback servers.
Command: historical
Format:
PREFIX/das/DSN/historical?feature=FEATUREID
Description: This query returns all the versions of a specific feature, embedded in its respective segment(s).
Arguments:
- feature (required; one)
- URL identifier of a particular feature in the writeback.
Here is an example of a valid request:
http://www.writeback.com/das/writeback/historical?feature=http://writeback/9
Response:
The document returned from the historical request is an XML-formatted "DASGFF" document.
Format:
<?xml version="1.0" standalone="no"?>
<!DOCTYPE DASGFF SYSTEM "http://www.biodas.org/dtd/dasgff.dtd">
<DASGFF>
  <GFF version="1.0" href="http://localhost:8080/MyDas/das/writeback/historical?feature=http://writeback/9">
    <SEGMENT id="P05067" start="1" stop="770" version="7dd43312cd29a262acdc0517230bc5ca">
      <FEATURE id="http://writeback/9" label="Disease mutation">
        <TYPE id="BS:01019" category="inferred by curator (ECO:0000001)">disease</TYPE>
        <METHOD id="1">UniProt</METHOD>
        <START>143</START>
        <END>189</END>
        <SCORE>0.0</SCORE>
        <ORIENTATION>0</ORIENTATION>
        <PHASE>-</PHASE>
        <NOTE>testing note</NOTE>
        <NOTE>USER=http://user.myopenid.com</NOTE>
        <NOTE>VERSION=1</NOTE>
        <NOTE>DATE=2009-05-25 14:22:39.705735</NOTE>
        <LINK href="http://www.uniprot.org/uniprot/P05067">http://www.uniprot.org/uniprot/P05067</LINK>
      </FEATURE>
      <FEATURE id="http://writeback/9" label="DELETED">
        <TYPE id="" category="" />
        <METHOD />
        <START>0</START>
        <END>0</END>
        <SCORE>0.0</SCORE>
        <ORIENTATION>0</ORIENTATION>
        <PHASE>-</PHASE>
        <NOTE>USER=http://user.myopenid.com</NOTE>
        <NOTE>VERSION=2</NOTE>
        <NOTE>DATE=2009-06-10 17:58:11.83588</NOTE>
      </FEATURE>
    </SEGMENT>
  </GFF>
</DASGFF>
User Authentication
Controlling who is and is not authorized to make modifications in the writeback is a source implementation issue; however, the recommended way to pass credential parameters is through the NOTE element, in the form [KEY]=[VALUE], creating as many NOTE elements as the particular implementation requires.
<NOTE>USER=login</NOTE>
<NOTE>PASSWORD=keypass</NOTE>
Proposed Extensions
These extensions are merely proposals for future modifications, and do not yet have implementations.
DAS search
December 3, 2010
This is a working document and a proposal for an extension to the DAS 1.6 specification to provide a mechanism for programmatic search of content within annotation servers.
This proposal aims to provide the basis for an optional extension of the features command to support elaborate queries.
New Argument - query
A new argument, query, should be added to the features command, so that a request for this command is defined as:
SERVER/das/DSN/features?segment=RANGE [;segment=RANGE] [;type=TYPE] [;type=TYPE] [;category=CATEGORY] [;category=CATEGORY] [;feature_id=ID] [;maxbins=BINS] [;query=DASQUERY]
Where DASQUERY is described below. If a request simultaneously contains the segment, feature_id and query arguments, they should be treated as a conjunction of the corresponding filters (i.e. the resulting features must satisfy all three conditions), as in the example below.
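For example, the following request (against a hypothetical source URL) returns only the features of segment P05067 that also match a fielded query:

http://www.example.com/das/mysource/features?segment=P05067;query=typeLabel:polypeptide_domain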
DAS Query Language
The query language is based on the Lucene query language. A query is broken into terms and operators:
- Terms: single words or phrases (groups of words surrounded by quotes). E.g. polypeptide AND "alpha helix"
- Fields: used to search on a specific field. See the next section for the available field names. E.g. type:membrane
- Term modifiers: wildcard searches, fuzzy searches, proximity and range searches. E.g. cur*
- Operators: OR (or space), AND, NOT, +, -. E.g. typeCvId:CV\:00001 AND featureLabel:"one Feature"
- Grouping and field grouping: (typeCvId:CV\:00001 AND featureLabel:"one Feature") OR typeId:twoFeatureTypeIdOne
The following table shows the standard fields available in DAS advanced searches (an encoding sketch follows the table):
Field Name | Searches on | Example |
---|---|---|
featureId | The Id of the feature | featureId:IPR003593_450_639 |
featureLabel | In case the feature has a label | featureLabel:"ABC transporter" |
segmentId | The Id of the segment | segmentId:P05701 |
segmentLabel | In case the segment has a label | segmentLabel:P53 |
segmentStart | Using the start coordinate of the segment | segmentStart:1000 |
segmentStop | Using the stop coordinate of the segment | segmentStop:2000 |
typeId | Using the id of the type associated with the feature | typeId:SO\:0000417 |
typeCvId | Using (if available) the ontology id of the type associated with the feature | typeCvId:SO\:0000417 |
typeLabel | Using (if available) the label of the type associated with the feature | typeLabel:polypeptide_domain |
typeCategory | Using (if available) the category of the type associated with the feature | typeCategory:coverage |
type | Using any of the fields related to the type associated with the feature (i.e. typeId OR typeCvId OR typeLabel OR typeCategory) | type:SO\:0000417 |
methodId | Using the id of the method associated with the feature | methodId:ECO\:0000029 |
methodCvId | Using (if available) the ontology id of the method associated with the feature | methodCvId:ECO\:0000029 |
methodLabel | Using (if available) the label of the method associated with the feature | methodLabel:"inferred from InterPro motif similarity" |
method | Using any of the fields of the method associated with the feature | method:inferred |
start | Using the start coordinate of the feature | start:100 |
stop | Using the stop coordinate of the feature | stop:200 |
score | Using (if available) the score of the feature | score:0.5 |
orientation | Using (if available) the orientation of the feature | orientation:\+ |
phase | Using the phase of the feature | phase:1 |
note | Using (if available) any of the notes of the feature | note:200 |
link | Using (if available) any of the links of the feature | link:200 |
target | Using (if available) any of the targets of the feature | target:supercontig201 |
parent | Using (if available) any of the parents of the feature | parent:chromosome1 |
part | Using (if available) any of the parts of the feature | part:contig201a |
all | Using any of the above fields of the feature | all:chromosome |
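Because a query value can contain spaces, quotes, colons and backslash escapes, clients should percent-encode it before appending it to the request URL. A minimal sketch in Python (the source URL is hypothetical):

from urllib.parse import quote_plus

BASE = "http://www.example.com/das/mysource/features"  # hypothetical source

# Fielded Lucene-style query: an ontology-id filter AND a label phrase.
dasquery = 'typeCvId:SO\\:0000417 AND featureLabel:"ABC transporter"'

# quote_plus encodes the spaces, quotes, colons and backslashes that have
# special meaning in URLs or in the query language itself.
url = BASE + "?segment=P05067;query=" + quote_plus(dasquery)
print(url)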
Capability
A source that supports this extension has to reflect it in its sources command response with a line similar to:
<capability type="das1:advanced-search" />
Implementation Notes
Experience with the first prototype of the advanced search capability in MyDas suggests that the entry_points and feature-by-id capabilities are prerequisites of the new capability: they are needed to automatically build the indexes that improve search performance.
Support for alternative content formats
Content negotiation, either via request parameters or HTTP auto-negotiation, would allow support for response formats other than DAS XML. Examples might include JSON, XHTML, etc.
Rationale
DAS XML is expressive but can be very verbose. This is particularly problematic for the features query, which depending on server and query can result in extremely large responses. It is therefore desirable to allow more efficient alternative content formats to be returned within the general DAS query framework, particularly for feature queries.
Another advantage is simplicity. If we allow a DAS server to send back other formats, it can take advantage of libraries such as Picard or tools like samtools to run region queries on indexed flat files and return the data without having to parse and process them. All the server would have to do is decode the request URL and then use Picard or samtools to grab the data from the flat file and return it to the client.
Proposed Implementation
At the DAS workshop the following was discussed as an alternative to the initial proposal below. The advantage of this alternative is that it should work well both for UCSC-style sources, whose tracks are separated by type, and for sources whose tracks are separated by data source name (e.g. Ensembl). Keeping the format request and response separate from the types request/response means we do not break the 1.6 spec. The format in this example also allows data providers to offer different response formats for different commands. It has also been proposed that format names be recorded on this wiki so as to avoid name clashes. Names also need to be specific: a name like "JSON", for example, says nothing about how the data is laid out; it is like saying "my data comes as a tab-separated file" without knowing what the "columns" represent.
A client can query a data source for a list of the formats it supports; for example, ../das/hg18/format should result in something like the following:
<DASFORMAT>
  <COMMAND name="das1:features">
    <FORMAT name="das-JSON">
      <!-- if no types are specified here, all types for this source have this format for this command -->
      <TYPE id="gene"/>
      <TYPE id="exon"/>
    </FORMAT>
    <FORMAT name="das-GoogleProtocolBuffers">
      <!-- if no types are specified here, all types for this source have this format for this command -->
      <TYPE id="gene"/>
      <TYPE id="exon"/>
    </FORMAT>
  </COMMAND>
  <COMMAND name="das1:entry_points">
    <FORMAT name="das-JSON"/>
  </COMMAND>
</DASFORMAT>
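To illustrate how a client might consume such a response, here is a sketch that lists the format names advertised for a given command (element and attribute names follow the example above; everything else is an assumption about eventual usage):

import xml.etree.ElementTree as ET

def formats_for(dasformat_xml, command="das1:features"):
    """Return the format names a source advertises for one command."""
    root = ET.fromstring(dasformat_xml)
    return [fmt.get("name")
            for cmd in root.findall("COMMAND") if cmd.get("name") == command
            for fmt in cmd.findall("FORMAT")]

# For the example response above this would return:
# ['das-JSON', 'das-GoogleProtocolBuffers']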
Initial proposal
Addition of an optional <FORMAT> element (zero or more) as a child of the <TYPE> element in the types command response, with required attributes "name" (an arbitrary character string that uniquely identifies the format within the server) and "mimetype" (which must follow standard mimetype identifier rules). Note that this strategy also allows multiple formats with the same mimetype.
Addition of an optional "format" query parameter to the features query. The format parameter SHOULD be a format name that the server recognizes. If the server can return ALL features satisfying the query in the specified format, it should do so and set the response Content-Type header accordingly. If the server does not recognize the requested format, or cannot return all of the matching features in that format, it should return an HTTP error message (with X-DAS-Status also set?).
Both of these additions are optional, therefore these changes will not affect servers or clients that do not support alternative content formats.
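A sketch of what a features request with the proposed format parameter might look like (assuming the requests library; the source URL and format name are illustrative):

import requests

url = "http://www.example.com/das/hg18/features"  # hypothetical source
response = requests.get(url, params={"segment": "1:1,100000",
                                     "format": "das-JSON"})

if response.ok:
    # The server honoured the format and set Content-Type to match.
    print(response.headers["Content-Type"])
else:
    # Unrecognized format, or not all features available in that format.
    print(response.status_code)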
Authentication
Per-user access control for DAS servers would be useful for federated distribution of private data, and would be necessary for the use of DAS in circumstances that are commercially sensitive or subject to patient privacy controls. Although authentication can theoretically be overlaid onto DAS, it has become apparent that a lack of an explicit recommendation for a single implementation strategy has prevented take-up by DAS clients and servers. As discussed at the 2010 DAS Workshop, there are two alternative proposals:
Standard HTTP Authentication
In this model, the DAS specification would simply specify that the existing HTTP authentication strategies (basic and digest) can be used in DAS, and that clients should be ready to handle these situations. This strategy is simple to adopt, and has a large number of existing implementation tools. However, clients and servers must all maintain independent implementations (with all of the regulatory privacy concerns) and users have separate login credentials for every DAS server.
Delegated Authentication
In a delegated authentication system, authentication is effectively "outsourced" to a third party by clients and servers, leaving servers only to deal with authorisation (i.e. servers maintain lists of users who are allowed access, but don't have to worry about passwords). This model is used by systems such as OpenID. However, OpenID itself is fundamentally tied to user-browser environments due to its use of HTTP redirects, and therefore cannot be used with DAS (not all DAS clients are browser based, and even browser-based clients use proxies due to technical limitations). This model is therefore a DAS-specific implementation of delegated authentication. The following summarises the steps in an authenticated DAS request; a client-side sketch appears after the notes below:
- A DAS client asks the user for his/her "DAS logon" credentials. This would be an email address and password. The DAS client is free to do this in any mechanism, which will depend on its implementation. For example, web-based clients might use HTTPS, desktop clients can use a dialog box.
- The DAS client forwards the credentials over HTTPS to the DAS registry, which is acting as the 'credential provider'. The client also provides a list of servers it wishes to access. The location of the registry is predefined, as the client implicitly trusts the DAS Registry to confirm that the user is who he says he is.
- If the password is correct, the DAS registry responds with a token for the client to use in requests to each DAS server. These tokens allow the client to act on the user's behalf without divulging the user's password.
- The DAS client initiates a DAS request (e.g. for features) to the DAS server using HTTPS, and embeds the user's email address and token in the request.
- The DAS server forwards the email address, token and its own URL to the DAS registry (which it trusts), and asks if the token is valid for that user and server.
- If the token is valid, and the email address identifies someone who is authorised to access the DAS server, the DAS server returns the data to the DAS client.
Some points about this:
- Tokens need to be tied to a specific DAS server, as otherwise an unscrupulous server could use a token received from a client to access data from another server.
- The registry must check email addresses. Otherwise, a user could register an email address they do not own in order to gain access to a server.
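A client-side sketch of the flow, with hypothetical registry and server endpoints and an assumed token-response shape (the proposal does not yet define concrete URLs, parameter names or payload formats):

import requests

REGISTRY = "https://www.dasregistry.org/login"            # hypothetical endpoint
SERVER = "https://das.example.org/das/private/features"   # hypothetical server

# Steps 1-3: exchange credentials for a per-server token over HTTPS.
tokens = requests.post(REGISTRY, data={
    "email": "user@example.com",
    "password": "secret",
    "servers": SERVER,
}).json()  # assumed response shape: {server URL: token}

# Step 4: embed the email address and server-specific token in the request.
response = requests.get(SERVER, params={
    "segment": "P05067",
    "email": "user@example.com",
    "token": tokens[SERVER],
})
# Steps 5-6 happen server-side: the DAS server validates the token with the
# registry before returning data.
print(response.status_code)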
Comparison
| HTTP | Delegated |
---|---|---|
Credentials | The user must maintain usernames and passwords for each and every DAS server. | The user has a single "DAS password". |
DAS client trust | The user must trust a third party DAS client with his password. Unscrupulous clients can access a user's data from a single DAS source. | The user must trust a third party DAS client with his password. Unscrupulous clients can access all a user's data from any DAS source. |
Storing passwords | DAS servers, and DAS clients storing passwords between sessions, need to worry about secure password storage. | The DAS Registry, and DAS clients storing passwords between sessions, need to worry about secure password storage. Servers do not. |
Rescinding access | A user cannot rescind access (for example to a rogue client) without changing all their passwords. | Tokens could be cancellable, enabling the user to visit the DAS registry and rescind access privileges to a client at will. |
Authorisation | In addition to maintaining user accounts and passwords, DAS servers must also adopt their own mechanism for authorisation. | DAS servers must still adopt their own mechanism of authorisation by tying email addresses to internal user lists. |
Use of URIs for DAS identifiers
Often, the use of unique IDs within DAS is poorly executed. Formally adopting URIs as identifiers (including specifying how URIs are built from URI references within DAS XML documents) would allow cross referencing between sequences and annotations within and outside DAS.
Entry Points for annotation servers
January 28, 2010
Rationale
In DAS version 1.6, the entry_points command is required for reference servers and optional for annotation servers.
However, it is difficult for annotation servers to support this command because the start/stop attributes of the SEGMENT element are mandatory. In contrast to reference servers, annotation servers very rarely know the length of each entry point and therefore cannot satisfy this requirement. This is a pity, because it would be useful for clients if annotation servers were able to provide a list of IDs for the segments they have annotated. This would be very easy to implement, because every server always knows the IDs of the segments it has annotated and needs this information in order to support the unknown-segment capability.
Required Change
Allowing annotation servers to easily list their entry points would require amending the specification to say something like the following (an example response sketch appears after the list):
- reference servers must always list all possible segments with their start/stop positions
- annotation servers implementing entry_points must list only the segments they have annotated, and start/stop are optional
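Under such a change, an annotation server's entry_points response might look like the following sketch, with start/stop simply omitted (segment IDs are illustrative; element names follow the DAS 1.6 entry_points response):

<?xml version="1.0" standalone="no"?>
<DASEP>
  <ENTRY_POINTS href="http://annotation.example.org/das/mysource/entry_points">
    <SEGMENT id="P05067" />
    <SEGMENT id="P53" />
  </ENTRY_POINTS>
</DASEP>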