Category Archives: Java

X-SAKAI-TOKEN Authentication

I would like to draw your attention to some important changes to the plumbing used in the hybrid integration between Sakai 2 and Sakai 3. First, a little background… Previously, an authentication mechanism existed that allowed a system external to Sakai 2 to call into its services without prompting the user for authentication, trusting the external system for the user’s identity. This has been used primarily to allow services in Sakai 3 to call services in Sakai 2 to retrieve, for example, the list of sites in which the user is a member. This kind of authentication should not be confused with a single-sign-on solution like CAS, which casts the end user as the primary actor. This type of mechanism is used for server-to-server communication, on behalf of the user, and is established by system administrators who declare trust relationships between the two systems. The basic flow looked like this:

  1. End user requests data from Sakai 3 via an HTTP GET (e.g. to get the set of Sakai 2 sites for the current user).
  2. Request is proxied through Sakai 3.
  3. Sakai 3 adds a header to the request: X-SAKAI-TOKEN (includes username).
  4. Sakai 2 receives the request and examines the X-SAKAI-TOKEN.
  5. If the token is valid, Sakai 2 establishes a new session based on the username passed in the token and processes the request.

So let’s examine this a little more closely… An example X-SAKAI-TOKEN looks like:

EEFSxb/coHvGM+69RhmfAlXJ9J0=;admin;1273688664630

The token is comprised of three pieces of information separated by semicolons:

  1. EEFSxb/coHvGM+69RhmfAlXJ9J0= : A cryptographically secure hash of the message.
  2. admin : The username of the current user signed into the calling system, i.e. Sakai 3 in this example.
  3. 1273688664630 : The number of milliseconds since the epoch.

Inquiring minds are probably wondering why Sakai 2 would trust such a token… I mean, just because a web request shows up at your front door with some official-looking identification does not mean we should allow it to run code as admin, right? Well, that is where the next concept is introduced: the shared secret. Think of a shared secret as a password that both systems know. Sakai 3 uses the shared secret to sign the message, and since Sakai 2 also has the shared secret, it can verify the integrity of the message (i.e. ensure the message has not been spoofed or tampered with in any way). Just to be clear, the message admin;1273688664630 is signed using the shared secret to create the hash EEFSxb/coHvGM+69RhmfAlXJ9J0= and then the hash is prepended to the message to create the token X-SAKAI-TOKEN: EEFSxb/coHvGM+69RhmfAlXJ9J0=;admin;1273688664630. Now Sakai 2 has everything it needs to verify the token. It can use its own shared secret to compute a hash of the message and verify it equals the hash that was passed in the token. If the hashes are equal, the message is valid and has not been tampered with in any way.
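
To make the verification step concrete, here is a minimal sketch of the server-side check in Java. The class and method names are illustrative rather than actual Sakai code; it assumes HMAC-SHA1 as the signing primitive (the algorithm the implementation was later upgraded to, as described below) and uses java.util.Base64 for brevity:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class TokenValidator {

    // Illustrative sketch, not the actual Sakai implementation.
    // Returns the trusted username if the token checks out.
    public static String validate(String token, String sharedSecret) throws Exception {
        int firstSemi = token.indexOf(';');
        String presentedHash = token.substring(0, firstSemi);
        String message = token.substring(firstSemi + 1); // e.g. "admin;1273688664630"

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] computed = mac.doFinal(message.getBytes(StandardCharsets.UTF_8));

        // Constant-time comparison of the presented and computed hashes.
        if (!MessageDigest.isEqual(Base64.getDecoder().decode(presentedHash), computed)) {
            throw new SecurityException("x-sakai-token hash mismatch");
        }
        // A real implementation would also check the timestamp for freshness.
        return message.split(";")[0];
    }
}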

This capability is great for calling into Sakai 2. But lately I have started looking into how tools or widgets might start passing results between systems. Think about a student submitting an assignment to a Sakai 2 tool and having the results posted to a Gradebook running in Sakai 3. We needed a way for Sakai 2 to call into Sakai 3 using the X-SAKAI-TOKEN mechanism — and now we have it! Thanks to some help from Dr. Ian Boston, we have pulled the same authentication and identification techniques into Sakai 3, with some improvements of course:

  1. The HTTP header X-SAKAI-TOKEN has been renamed x-sakai-token (to save on your CAPS LOCK key).
  2. The signature has been upgraded to the latest and greatest standards: RFC 2104 compliant HMAC (Hash-based Message Authentication Code). Thanks to Carl Hall for adding this capability to Sakai 3.
  3. sakai.auth.trusted.server.enabled: setting to completely enable or disable this feature.
  4. sakai.auth.trusted.server.safe-hosts: setting to control which other servers we trust.
  5. The Sakai 2 implementation has also been updated with the same great improvements.

So now that we have some plumbing installed, I will return to considering results passing between Sakai 2 and Sakai 3. 🙂

PS – This same mechanism could be used with 3rd party systems if desired. Build a message, sign it with an HMAC, and stuff it into an HTTP x-sakai-token header and away you go… Think about it…
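
For a third-party caller, the client side really is that small. Below is a minimal sketch under some stated assumptions: HmacSHA1 as the algorithm, a made-up target URL, and the receiving server configured with the same shared secret and with the caller listed in sakai.auth.trusted.server.safe-hosts:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TrustedClient {

    // Builds "<base64 hmac>;<username>;<millis since epoch>"
    static String createToken(String sharedSecret, String username) throws Exception {
        String message = username + ";" + System.currentTimeMillis();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String hash = Base64.getEncoder().encodeToString(
                mac.doFinal(message.getBytes(StandardCharsets.UTF_8)));
        return hash + ";" + message;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; substitute a real service URL.
        URL url = new URL("http://localhost:8080/some/service.json");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("x-sakai-token", createToken("secret", "admin"));
        System.out.println(conn.getResponseCode());
    }
}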

2 Comments

Filed under Java, Sakai, Technology

Importing content from Sakai 2 into Sakai 3 (take 2)

Since returning from holiday, I have resumed work on importing content from Sakai 2 into Sakai 3. The first order of business was to refactor the XML parsing from SAX to StAX to deal with some potentially nasty classloader issues, as suggested by Dr. Ian Boston. That went smoothly, and I have to say that after using SAX, StAX is a much improved utility: you have more control and can pull events from the stream rather than having them pushed to you. This makes for more natural and maintainable Java code.
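
To illustrate the pull style, here is a minimal StAX sketch. The element and attribute names are made up for illustration and do not reflect the actual content.xml layout:

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.FileInputStream;

public class ContentXmlReader {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(
                new FileInputStream("content.xml"));
        try {
            while (reader.hasNext()) {
                // Pull the next event when we are ready for it,
                // instead of receiving SAX callbacks.
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "resource".equals(reader.getLocalName())) {
                    System.out.println("resource id="
                            + reader.getAttributeValue(null, "id"));
                }
            }
        } finally {
            reader.close();
        }
    }
}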

Next, there were some improvements that I wanted to make to the import code:

  1. Support for org.sakaiproject.content.types.urlResource types.
  2. Adding the metadata to the imported content.

After a week of plugging away on both fronts, some good progress has been made. First, a model conversion had to be considered for org.sakaiproject.content.types.urlResource types. In Sakai 2, these URL resources are simply presented in the UI as hyperlinks that open in a new window. Given the RESTful nature of Kernel2 (K2), I needed to decide how best to represent a hyperlink. My first thought was to use the proxy capabilities of K2, but that presented some issues: proxy nodes must be stored under /var/proxy, and the whole notion of proxying HTTP requests has security implications – that is why K2 does not allow just anyone to create a proxy node.

I was probably too close to the problem and had trouble seeing the more obvious solution – why not use an HTTP redirect? After noodling the problem for a while, the simpler solution finally entered my brain. After a bit of acking, I found that Sling already has support for redirects through its RedirectServlet, which binds to sling:resourceType=sling:redirect. So then it was just a fairly simple matter of creating a node and setting the properties accordingly:

{
"sling:resourceType":"sling:redirect",
"sling:target":"http://sakaiproject.org/",
"sakai:id":"AirbV1U-",
"sakai:user":"admin",
"jcr:mimeType":"text/url",
"sakai:filename":"http://sakaiproject.org/",
"jcr:created":"Wed Oct 07 2009 13:53:00 GMT-0400",
"jcr:lastModified":"Wed Oct 07 2009 13:53:00 GMT-0400",
"jcr:primaryType":"nt:unstructured"
}
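
For reference, creating such a node through the JCR API is only a few lines. This is a rough sketch rather than the actual import code: it assumes a JCR 2.0 session, the parent path is a placeholder, and the property names simply follow the dump above:

import javax.jcr.Node;
import javax.jcr.Session;

public class RedirectNodeWriter {

    // 'session' is assumed to be an authenticated JCR session.
    public static void createRedirect(Session session, String parentPath,
            String nodeName, String targetUrl) throws Exception {
        Node parent = session.getNode(parentPath);
        Node node = parent.addNode(nodeName, "nt:unstructured");
        node.setProperty("sling:resourceType", "sling:redirect");
        node.setProperty("sling:target", targetUrl);
        node.setProperty("sakai:filename", targetUrl);
        node.setProperty("jcr:mimeType", "text/url");
        session.save();
    }
}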

That was pretty much it for org.sakaiproject.content.types.urlResource types; the redirect works as expected. There are still a couple of things I would like to improve in this area:

  1. The node names for these urlResources need to be beautified. As the resource name comes through in the content.xml, it looks like “http:__sakaiproject.org_”. I have to strip out the “:” to avoid a JCR exception, so the node name currently looks like “http__sakaiproject.org_”. Ideally, it would match the display name, i.e. “http://sakaiproject.org/”. Perhaps some manner of escaping invalid characters might work (see the sketch after this list), but further digging into the JCR is required. I am able to set “sakai:filename”:“http://sakaiproject.org/”, so maybe that is good enough; TBD.
  2. Since the jcr:primaryType==nt:unstructured, the URL is rendered as a folder when connected via WebDAV. It would be nice to get these URLs to render as a leaf node instead. I experimented with jcr:primaryType=nt:file, but ran into some roadblocks and backed off.
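
On the escaping idea from item 1: Jackrabbit’s jcr-commons utilities may already cover this. A minimal sketch, assuming org.apache.jackrabbit.util.Text is available on the classpath (whether the escaped names look any better in the UI is still TBD):

import org.apache.jackrabbit.util.Text;

public class NodeNames {
    public static void main(String[] args) {
        String displayName = "http://sakaiproject.org/";
        // Escapes characters that are illegal in JCR names (e.g. ':' and '/')
        // using %XX sequences, so the original name remains recoverable.
        String nodeName = Text.escapeIllegalJcrChars(displayName);
        System.out.println(nodeName);
    }
}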

Regarding the mapping of metadata, that task proved to be mostly straightforward for the fields that have a one-to-one mapping. However, there are currently more supported metadata fields in Sakai 2 than there are in Sakai 3. Since there is no limitation on the number or type of metadata fields that can be stored in Sakai 3, I am considering storing all of the fields from Sakai 2 as a precaution and possible future-proofing. I am left wondering whether to store them with their current keys or to prepend something like “sakai2:” to all of the keys before storing them.

Looking towards the near term, I am likely to look into the following issues:

  1. All user-uploaded content is currently stored in a BigStore under /_user/files. After discussing with Ian Boston, I will most likely refactor the import code to store its content in that BigStore as well. The BigStore concept will likely be redesigned in the near future, though, so any work I do in this area will be nicely abstracted so that this behavior can be changed easily if and when BigStore is redesigned.
  2. With the move to BigStore, I will have to take a look at access control lists (ACLs) so that the user importing content will have the proper permissions.
  3. Next, I need to take a look at the contract between K2 and the “Content & Media” widget so that the imported content appears properly within the user interface.
  4. What about other content types that could be imported today? Content from the Forums tool may be a good candidate as K2 currently has support for threaded discussions. Chat might be another place to look… Other ideas?

Regarding the Sakai 2+3 Hybrid mode, I hope to arrange a two-day coding sprint with Dr. Chuck Severance and Noah Botimer to develop a BasicLTI consumer for Sakai 3. This would allow us to easily place a Sakai 2 tool within the context of a Sakai 3 site. With any luck, we will get this sprint organized by the end of January. Until next time, L

1 Comment

Filed under Java, Sakai, Technology

Importing content from Sakai 2 into Sakai 3 (take 1)

Development was starting to slow down for me on the Sakai 2+3 Hybrid Mode, so I needed to turn my primary focus elsewhere. Michael Korcuska and I had decided previously that the next focus point would be to develop a working prototype that would allow someone to take a zip file exported from Sakai 2’s Site Archive tool and import the content into Sakai 3. Initially the scope would be limited to just the content contained within the Resources tool (a.k.a. ContentHostingService) since Sakai 3 currently has enough functionality to support the files and folders model.

When I started down this path, I did not expect to reach a stopping point by the end of the week. Frankly I thought it would take longer. But after a couple of days, I had the logic around parsing the content.xml file and extracting the content into my local file system working pretty well. The next couple of days were spent porting this working code into Kernel2 as a SlingServlet and creating a RESTful web service. After a couple of bumps in the road and someone moving my cheese, I am pleased to say that the first iteration of this work is complete.

As an example, you can take the sample archive.zip file which came from a Sakai 2 test instance, and upload it to Sakai 3:

curl -F"path=/site/import/folder" -F"Filedata=@archive.zip" http://username:password@localhost:8080/foo.sitearchive.json

The web service expects two parameters:

  1. path: The path to a folder where you want the content imported.
  2. Filedata: one or more zip files to import.
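
For the curious, the receiving end is roughly this shape: a Sling servlet that reads those two parameters. This is a simplified sketch rather than the actual code (the class name and the elided unzip step are illustrative); the real implementation is in my github repository:

import java.io.IOException;
import java.io.InputStream;

import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.request.RequestParameter;
import org.apache.sling.api.servlets.SlingAllMethodsServlet;

public class SiteArchiveImportServlet extends SlingAllMethodsServlet {

    @Override
    protected void doPost(SlingHttpServletRequest request,
            SlingHttpServletResponse response) throws IOException {
        // Where the imported content should land, e.g. "/site/import/folder".
        String path = request.getParameter("path");
        // "Filedata" may carry one or more uploaded zip files.
        RequestParameter[] files = request.getRequestParameters("Filedata");
        if (path == null || files == null) {
            response.sendError(400, "Both 'path' and 'Filedata' are required");
            return;
        }
        for (RequestParameter file : files) {
            InputStream zip = file.getInputStream();
            // ... unzip, parse content.xml with StAX, write nodes under path ...
        }
        response.setStatus(200);
    }
}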

The result will be a folder hierarchy containing the imported files and folders.

While there is still much to be done (e.g. mapping file metadata, support for more resource types, etc.), this is an important first step. First, it demonstrates technical feasibility. Second, it creates the beginnings of a framework that can be extended to support importing other Sakai 2 tools, and eventually entirely different import formats like IMS Common Cartridge. If you are interested in looking at the code, it can be found at my github repository.

Looking forward, I will likely begin investigating IMS Basic LTI as a mechanism to enhance the Sakai 2+3 Hybrid capabilities. Currently, the hybrid mode supports entire sites (i.e. the user chooses to enter either a Sakai 3 site or a Sakai 2 site via the Sakai 3 portal). Ideally, one should be able to mix and match tools from either Sakai 2 or 3 in a Sakai 3 site. Dr. Chuck has done some good work in this area – Sakai 2.7.0 will have both a BasicLTI consumer and producer. So theoretically, if Sakai 3 had a BasicLTI consumer, it could present a Sakai 2 tool to a user as a Sakai 3 widget. My hope is that Dr. Chuck, Noah Botimer, and I can turn out a Sakai 3 LTI consumer relatively quickly. More to come in the new year. Best regards, L

7 Comments

Filed under Java, Sakai

maven2 bash completion complete

I have been utterly spoiled by bash completion when using svn and git for the past few months – the only thing that was missing was maven completion.  Since I could not sleep this morning, I set out to fix that.  First, a little bit of background.  I have been using MacPorts to install both subversion and git.  Both had a variant “+bash_completion” – I did not know what it did at the time, but it sounded cool so I included that variant when I installed them.

git-core @1.6.5.3_0+bash_completion+doc
subversion @1.6.5_0+bash_completion+no_bdb
For example: sudo port install git-core +bash_completion +doc

After digging a bit further, I figured out that I needed to add the following lines to ~/.profile to get bash_completion working:

if [ -f /opt/local/etc/bash_completion ]; then
    . /opt/local/etc/bash_completion
fi

On the surface you might think that completion would only be aware of common command line arguments to the svn and git binaries, but it is actually a little smarter. For example, in my git repository, typing “git checkout <TAB>” will list all of the branches in the repository! Very handy!

So now, how to get maven2 commands into bash completion? I started with the first Google hit: Guide to Maven 2.x auto completion using BASH. That worked, but it was missing a lot of the commands I wanted easy access to, and it was not obvious to me how to extend the script. Next, Google led me to another hit: Maven Tab Auto Completion in Bash. This script had more completions out of the box, and it was obvious how to add more. With some quick hacking, my /opt/local/etc/bash_completion.d/m2 now looks like:

# Bash Maven2 completion
#
_mvn()
{
    local cmds cur colonprefixes
    cmds="clean validate compile test package integration-test \
        verify install deploy test-compile site generate-sources \
        process-sources generate-resources process-resources \
        eclipse:eclipse eclipse:add-maven-repo eclipse:clean \
        idea:idea -DartifactId= -DgroupId= -Dmaven.test.skip=true \
        -Declipse.workspace= -DarchetypeArtifactId= \
        netbeans-freeform:generate-netbeans-project \
        tomcat:run tomcat:run-war tomcat:deploy \
        sakai:deploy -Predeploy \
        dependency:analyze dependency:resolve \
        versions:display-dependency-updates versions:display-plugin-updates \
        javadoc:aggregate javadoc:aggregate-jar \
        source:aggregate"
    COMPREPLY=()
    cur=${COMP_WORDS[COMP_CWORD]}
    # Work around a bash_completion issue where bash interprets a colon
    # as a separator (borrowed from the darcs work-around for the same issue).
    colonprefixes=${cur%"${cur##*:}"}
    COMPREPLY=( $(compgen -W '$cmds' -- $cur) )
    local i=${#COMPREPLY[*]}
    while [ $((--i)) -ge 0 ]; do
        COMPREPLY[$i]=${COMPREPLY[$i]#"$colonprefixes"}
    done
    return 0
} &&
complete -F _mvn mvn

You will notice that I have added the common Sakai goals like sakai:deploy or -Predeploy. I have also added some other maven plugins that I find useful. Give it a try: “mvn <TAB><TAB>” or maybe “mvn sak<TAB>” or how about “mvn ecl<TAB>”. I hope you will find bash completion just as satisfying as I do.  Best, L

2 Comments

Filed under Java, Technology

Investigating site exports from Sakai 2

So the export/import file format investigation has reached some early conclusions regarding the use of Moodle’s backup schema, and it appears we will be looking elsewhere. See: Moodle export-import format investigation and the email thread itself.

While we ponder IMS Common Cartridge, I thought I would investigate what it would take to provide the capability of exporting Sakai 2 sites into the existing Sakai 2 proprietary XML format. This is a long-standing request within the Sakai community, but one that no one has been willing to tackle. The current situation is a bit odd: most tools do participate in the method EntityTransferrer.transferCopyEntities(), so it is possible to copy the structure of a site from semester to semester. I use the term “structure” because it is common practice among LMS applications to copy only what might be termed a “template” across semesters. For example, this copy process would include content like forum definitions, but not student responses; grade book items, but not student grades; etc. The primary use case: an instructor who taught a class last semester can import that previous site into the current semester’s course site to reduce setup time.

So far so good – but here is where things get a bit dodgy… The EntityTransferrer.transferCopyEntities() method copies entities directly from one site to another (i.e. without writing any of these entities to XML). While Sakai 2 does have a mechanism for writing entities to XML, called ArchiveService.archive(), there are at least two problems with it: 1) unlike transferCopyEntities(), all student postings, grades, etc. are included in the XML produced (i.e. it is more like a site backup), and 2) only a small subset of tools actually implement the ArchiveService.archive() interface! So this leaves me wondering:

  1. Does anyone actually depend on ArchiveService.archive()? My instincts tell me no since most of the tools do not implement it. Am I wrong?
  2. Could we usurp the ArchiveService.archive() interface and change the behavior so that only site structure is exported without student content?
  3. Do we leave ArchiveService.archive() alone and create a new API?
  4. How many tools still need to implement archive()?
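
For reference, the transfer interface at the center of this discussion has roughly this shape (paraphrased from memory; consult the Sakai 2 kernel source for the authoritative signature). Any structure-only export API would presumably hang off exactly this kind of per-tool hook, just writing XML instead of copying directly:

import java.util.List;

// Paraphrased sketch of the Sakai 2 interface discussed above; not the
// authoritative definition.
public interface EntityTransferrer {
    // Copies a tool's "template" entities directly from one site (context)
    // to another; nothing is written to XML along the way.
    void transferCopyEntities(String fromContext, String toContext, List ids);
}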

1 Comment

Filed under Java, Sakai

Eclipse 101 – Basics, Tips & Tricks Screencast

I am pleased to share with the community almost two hours of developer training that was taped before a live audience – the Oncourse developers.  🙂  The team has started a biweekly training regimen and oddly enough I was elected to deliver the first session.  After a few code review sessions, it became clear that not everyone was familiar with some of the basic capabilities of Eclipse.

This screencast attempts to fill in some of those knowledge gaps.  I think you will find that it builds on some of the work that Zach Thomas has done in this space.  Zach did an excellent job of covering the setup and configuration of the environment.  While this is no magnum opus, I hope you find it useful, and I would appreciate your honest feedback.  Best, L

4 Comments

Filed under Java, Sakai, Technology

TSS: Easily manage license headers of your source files with Maven

From TSS: http://www.theserverside.com/news/thread.tss?thread_id=48526

Licensing source code can be rough, especially if you’re changing licenses, or adding license references to code that’s already been written or generated. Modifying licenses is quite time-consuming, and a developer doesn’t necessarily want to spend his or her time managing headers on source files. Searching on the Internet, I found only these tools relevant to license headers:

  • Release Audit Tool
  • The Maven 2 plugin for RAT
  • Checkstyle
  • Another one (whose name I do not remember) that is just a command-line tool. [Editor’s note: I tried to find a command-line tool for this, to no avail. Anyone who would like to offer pointers is welcome to do so.]

These tools lack features. Since I use Maven as a project management tool, I wanted a Maven 2 plugin capable of checking, in the verify phase, whether license headers are present, and of course I wanted the ability to add or update those headers. Therefore I wrote a Maven 2 license plugin, available at http://code.google.com/p/maven-license-plugin/, that anyone can use in a POM like this…

Leave a comment

Filed under Java, Sakai, Technology