Wednesday, December 31, 2014

Representing Your Local Maven Repository Structure in Neo4J Graph Database

Below you will find a way to use the Neo4J graph database to store metadata from your local Maven repository.

If you use Maven to manage your project dependencies, then after a while you might end up with a quite sizable local Maven cache. Maven keeps copies there of all jars downloaded from remote repositories (or installed by your local builds). Usually it is located in your home folder, under the hidden .m2 directory. For example, my local Maven repository is ~8GB, and honestly I don't remember the last time I cleaned it up.

Maven-based projects are configured to get the external artifacts (jars, zip, war files, etc.) necessary for a build by looking at a set of <dependency> nodes in the project's pom.xml file. This means you can find out which artifacts in your local Maven repository are still in use and which ones are potential waste simply by examining all your pom.xml files. Note that I am talking here about direct dependencies only, not dependency-on-dependency chains, which Maven is also capable of handling. If we want to investigate the whole chain, we need to recursively visit each pom file and its dependencies.
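Just to illustrate the idea (this is not the parsing code the tool itself uses), listing the direct dependencies of a single pom file can be done with nothing but the JDK and an XPath expression:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DirectDependencies {
    public static void main(String[] args) throws Exception {
        // First argument: path to a pom file
        Document pom = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(args[0]));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Direct dependencies only - transitive ones are not listed in this file
        NodeList deps = (NodeList) xpath.evaluate(
                "/project/dependencies/dependency", pom, XPathConstants.NODESET);
        for (int i = 0; i < deps.getLength(); i++) {
            Element dep = (Element) deps.item(i);
            // <version> may come out empty if it is managed by a parent pom
            System.out.println(xpath.evaluate("groupId", dep) + ":"
                    + xpath.evaluate("artifactId", dep) + ":"
                    + xpath.evaluate("version", dep));
        }
    }
}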

We all know that local storage is not very expensive these days, so we will not gain much by keeping an eye on how Maven uses our local artifacts and periodically wiping out the old ones. It is always cheaper to destroy the local cache completely and let Maven download everything it needs automatically during the next build. However, having an automated way of resolving the relations between your active projects and your local Maven cache might be interesting from several other points of view. For example, software audits (whether any of your commercial projects use unlicensed code) or code analysis (which versions of a particular dependency are used). Maven's own Dependency plugin has some of this functionality: it can build rudimentary trees and provide dependency information for one particular project or a group of related projects. But I can imagine a situation when you get a vulnerability report about one specific version of a dependency, and you would like to find out quickly where exactly it is used. This can be especially helpful if your organization uses a centralized Maven repository manager like Artifactory or Sonatype, and you can quickly poke into its local repository cache.

There might be other reasons why I came up with this crazy idea to keep Maven repository information in a graph database, so to stop the speculation I will only add that Neo4J is a great tool, and the task itself is very similar to the "Digital Asset Management" use case, enough said.

A couple of words about the implementation. Let's assume that you have a Neo4J database up and running somewhere, and you know the URL of its REST endpoint. On my machine it is http://127.0.0.1:7474/db/data/
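A quick way to check that the endpoint is reachable is a plain GET against the service root, which answers with a small JSON document describing the available resources. A minimal sketch, assuming a JAX-RS 2.0 client (such as Jersey 2) is on the classpath:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class PingNeo4j {
    public static void main(String[] args) {
        // Substitute your own REST root URL here
        String root = "http://127.0.0.1:7474/db/data/";
        Client client = ClientBuilder.newClient();
        // The service root lists the resources exposed by the Neo4J REST API
        String serviceRoot = client.target(root)
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        System.out.println(serviceRoot);
        client.close();
    }
}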

GitHub repository with source code: https://github.com/rokhmanov/repo-graph-maven

The application itself is a command-line tool which first scans your local Maven repository for *.pom files, excludes the SNAPSHOTs, and then processes the files one by one. We are only interested in a subset of each pom file, in particular the <artifactId>, <groupId> and <version> nodes, plus the <dependencies> node along with all its child nodes. These chunks of XML get unmarshalled into Java objects; see the corresponding Dependency.java and Project.java source files under the com.rokhmanov.graph.sample.entity package. The last step is the graph creation - the Neo4J RESTful API gets called by the Jersey Java client. Nothing really complex. Sketches of the scanning step and of the graph-creation call follow the usage and the schema below. This is the command-line syntax:

usage: repo-graph-maven.xxxx.jar [OPTION]...
Options:
    <directory> - (mandatory parameter) path to the local Maven repository.
    <serverURL> - (mandatory parameter) Neo4j REST server root URL.
    'clear' - (optional parameter) The existing database will be recreated if specified.
   
Example: java -jar repo-graph-maven.jar ~/.m2/repository http://192.168.0.1:7474/db/data/ clear
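
For reference, the scanning step mentioned above (find all *.pom files and skip SNAPSHOT builds) boils down to something like the following sketch, assuming Java 8; the tool itself may implement it differently:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class PomScanner {
    public static void main(String[] args) throws IOException {
        // First argument: path to the local Maven repository, e.g. ~/.m2/repository
        Path repo = Paths.get(args[0]);
        try (Stream<Path> files = Files.walk(repo)) {
            files.filter(p -> p.toString().endsWith(".pom"))      // pom descriptors only
                 .filter(p -> !p.toString().contains("SNAPSHOT")) // skip SNAPSHOT builds
                 .forEach(System.out::println);
        }
    }
}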


Structure of the Neo4J database: a single Root node "keeps" several links to Project nodes, each of which "has" links to Artifact nodes. So the graph schema is also very simple: "Root", "Project" and "Artifact" are nodes (vertices), and "keeps" and "has" are relationships (edges).

[Root]--keeps-->[Project]--has-->[Artifact]
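
To give an idea of what writing such a structure over the REST API looks like, here is a minimal sketch which pushes one Project and one Artifact through the Neo4J 2.x transactional Cypher endpoint. The coordinates are made up, and the tool itself may use the individual node/relationship resources instead of Cypher statements - treat it as an illustration only:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;

public class GraphWriter {
    public static void main(String[] args) {
        String root = "http://127.0.0.1:7474/db/data/";  // Neo4J REST root
        // MERGE (instead of CREATE) keeps nodes from being duplicated on repeated runs
        String cypher = "MERGE (r:Root {name:'root'}) "
                + "MERGE (p:Project {groupId:'com.example', artifactId:'demo-app', version:'1.0'}) "
                + "MERGE (a:Artifact {groupId:'commons-io', artifactId:'commons-io', version:'2.4'}) "
                + "MERGE (r)-[:keeps]->(p) "
                + "MERGE (p)-[:has]->(a)";
        // The transactional endpoint expects {"statements":[{"statement":"..."}]}
        String payload = "{\"statements\":[{\"statement\":\"" + cypher + "\"}]}";
        Client client = ClientBuilder.newClient();
        String response = client.target(root + "transaction/commit")
                .request(MediaType.APPLICATION_JSON)
                .post(Entity.json(payload), String.class);
        System.out.println(response);
        client.close();
    }
}

Using MERGE is also what allows several projects to end up pointing at the very same Artifact node instead of getting their own copies.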

Note that some Projects might use the same Artifacts, for example the same version of the jUnit library. The application handles this scenario, and such shared Artifact nodes get multiple incoming "has" relationships, which show up as loops in the visualization, like the one on the screenshot below:


If you have a large repository, the initial run might take several minutes. Each subsequent run will add only new projects and artifacts, so it will be shorter. Keep in mind that if you supply the "clear" parameter to the application, the whole Neo4J graph will be repopulated from scratch.

After successful execution, we can finally play with the graph. Below are sample Cypher queries along with their results:

1. What projects use the "commons-io" artifact?

MATCH (b:Project)-->(c:Artifact)
WHERE c.artifactId='commons-io'
RETURN b.artifactId, b.version, b.groupId

Result:

2. What versions of the "commons-io" artifact are used overall?

MATCH (n:Artifact)
WHERE n.artifactId =~ '.*commons-io.*'
RETURN n.artifactId, n.groupId, n.version

Result:




3. What are the 10 most used artifacts?

MATCH (n:Artifact)<-[r]-(x)
RETURN n.artifactId, n.groupId, n.version, COUNT(r)
ORDER BY COUNT(r) DESC
LIMIT 10

Result:


Feel free to run all these queries yourself using the Neo4J Browser. Things which I think could be improved or done differently:
  • Implement variable substitution. For example, the ${project.version} placeholder could be replaced by the actual value from the <parent> section of the pom file;
  • Delete old nodes and edges from the graph if the corresponding project does not exist anymore;
  • Implement batch REST calls to improve performance (see the sketch after this list).
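
On the last point, the transactional endpoint already accepts several statements in one request, so batching could look roughly like this sketch (made-up coordinates again):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;

public class BatchWriter {
    public static void main(String[] args) {
        String root = "http://127.0.0.1:7474/db/data/";
        // Several Cypher statements travel to the server in a single HTTP round trip
        String payload = "{\"statements\":["
                + "{\"statement\":\"MERGE (a:Artifact {groupId:'commons-io', artifactId:'commons-io', version:'2.4'})\"},"
                + "{\"statement\":\"MERGE (b:Artifact {groupId:'junit', artifactId:'junit', version:'4.11'})\"}"
                + "]}";
        Client client = ClientBuilder.newClient();
        String response = client.target(root + "transaction/commit")
                .request(MediaType.APPLICATION_JSON)
                .post(Entity.json(payload), String.class);
        System.out.println(response);
        client.close();
    }
}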
Overall this application is of POC quality and serves its needs. The task can of course be achieved with a regular "relational" approach; the amount of data and the number of joins in the database will not be very large with just two objects and a graph structure like the one above. But nothing prevents us from adding more complexity to the graph in the future, like the size of each artifact on the filesystem, or information about its internal structure or the license used. Keep in mind also that altering the schema (or graph) does not require bringing the database down, unlike altering a schema in a regular relational DB. This might be an important point when considering a solution like that in a Production environment.

4 comments:

  1. Very interesting post.

    I am looking to do something to track the common libraries in my company. The idea is that when a library is updated I can query the graph to determine its downstream dependencies, and then build each one in turn using a CI tool such as Jenkins. This way I can spot breaking changes immediately.

  2. Sion, thank you for your comment.
    As I mentioned above, it all depends on the situation. The graph database can be "too much" in some rudimentary cases, and for small Maven-based projects "mvn dependency:tree" would be just fine. A basic dependency report is also available from "code-quality" tools (like SonarQube); such tools can be included in the build process very easily.
    If you have a hierarchy of projects, and it is too complex for Maven itself to resolve (and Maven is not bad at this job), then definitely yes - a custom tool is best, and a graph database might help. I will be glad to hear about your approach and whether my solution has been useful. Feel free to use my code, and let me know if something is unclear or you need any help.

    1. Andriy, thanks for the fast reply. The scenario that I was looking to fix is described very well by Hans Dockter,
      here. It's a long video, but it captures the essence of my problem. A problem statement might be: "I need to know everything that is dependent on me, so that I can rebuild them and make sure I don't break them"

  3. Yes, Sion, thanks for the video. I got your point: from the dependencies point of view you need to verify your "consumers", not your "suppliers". I believe you are on the right track.

