Friday, July 11, 2014

Develop, test and deploy standalone apps on CloudBees

CloudBees is a cloud platform providing a repository, a CI service (Jenkins) and a server for your apps, so everything you need to develop, test and deploy. There are many options: the repository can be Git or SVN, and for the server you can choose Jetty, Tomcat, Glassfish, JBoss, Wildfly etc. It is also possible to run standalone applications, which are given a port number so they can start their own server. That's the case we'll cover here.

spray.io is a Scala framework for web apps. It allows you to create standalone web apps (starting their own server, spray-can) or somewhat limited .war ones (spray-servlet), which you can deploy on a JEE server like Glassfish or JBoss. We are going to use the standalone variant here.

You can clone the app from GitHub. Let's take a quick look at it now.

The app

Boot

The Boot file is a Scala App, so it's like a Java class with a main method: it's runnable. It creates the Service actor, which handles all the HTTP requests. It also reads the port number from the app.port system property and binds the service to the host and port. app.port is provided by CloudBees; if you want to run the app locally, you need to set it yourself, e.g. on the JVM command line with -Dapp.port=8080.

Service

Service mixes in the MyService trait, which handles routing for the empty path only. Yes, the app is not very complicated ;)

Buildfile

The build.gradle file is a bit more interesting. Let's start from its end.

  • mainClassName is set to the Scala App class. This is the class that gets run when you start the app locally from the command line with gradlew run.
  • applicationDefaultJvmArgs is set to -Dapp.port=8080, which is also necessary for running locally from Gradle. This is how we set the port the Service is bound to.
  • jar.archiveName sets the name of the generated .jar. Without it, the name depends on the project directory name.

You can run the application by issuing gradlew run (make sure gradlew file is executable). When it’s running, you can point your browser to http://localhost:8080 and you should see “Say hello to spray-routing on spray-can!” Nothing fancy, sorry.

There is also a “cb” task defined for Gradle. If you issue gradlew cb, it builds a zip file with all the dependency .jars, and szjug-sprayapp-1.0.jar, in its root. This layout is required for CloudBees standalone apps.

Deploy to CloudBees

First you need to create an account on CloudBees. Once you have one, download the CloudBees SDK, so you can run commands from your command line. On Mac I prefer brew install, but you are free to choose your own way.

When it's installed, run the bees command. When run for the first time, it asks for your login and password, so you don't need to provide them every time you want to use bees.

Now build the .zip we'll deploy to the cloud. Go into the app directory (szjug-sprayapp) and issue the gradlew cb command. This command not only creates the .zip file, it also prints the list of .jars you need to pass to the bees command as the classpath.

Deploy the application with the following command run from szjug-sprayapp directory:

bees app:deploy -a spray-can -t java -R class=pl.szjug.sprayapp.Boot -R classpath=spray-can-1.3.1.jar:spray-routing-1.3.1.jar:spray-testkit-1.3.1.jar:akka-actor_2.10-2.3.2.jar:spray-io-1.3.1.jar:spray-http-1.3.1.jar:spray-util-1.3.1.jar:scala-library-2.10.3.jar:spray-httpx-1.3.1.jar:shapeless_2.10-1.2.4.jar:akka-testkit_2.10-2.3.0.jar:config-1.2.0.jar:parboiled-scala_2.10-1.1.6.jar:mimepull-1.9.4.jar:parboiled-core-1.1.6.jar:szjug-sprayapp-1.0.jar build/distributions/szjug-sprayapp-1.0.zip

And here is an abbreviated version for readability:

bees app:deploy -a spray-can -t java -R class=pl.szjug.sprayapp.Boot -R classpath=...:szjug-sprayapp-1.0.jar build/distributions/szjug-sprayapp-1.0.zip

spray-can is the application name and -t java is the application type. The -R options are CloudBees properties, like the class to run and the classpath to use. The classpath files are helpfully printed when Gradle runs the cb task, so you just need to copy & paste them.
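If you ever want to build that classpath string yourself instead of copy-pasting, joining the printed jar names with “:” is all there is to it. A small sketch (the jar names below are a subset taken from the command above):

```shell
# Join jar names (one per line, as the cb task prints them)
# into a single ':'-separated classpath value.
jars='spray-can-1.3.1.jar
akka-actor_2.10-2.3.2.jar
szjug-sprayapp-1.0.jar'
classpath=$(printf '%s\n' "$jars" | paste -sd: -)
echo "$classpath"   # prints spray-can-1.3.1.jar:akka-actor_2.10-2.3.2.jar:szjug-sprayapp-1.0.jar
```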

And that's it! Our application is running on the CloudBees server, accessible at the URL shown in the CloudBees console.

Use CloudBees services

The app is deployed on CloudBees, but is that all? As I mentioned, we can also use a Git repository and Jenkins. Let's do that now.

Repository (Git)

Create a new Git repository in your CloudBees account. Choose “Repos” on the left, then “Add Repository”… it's all pretty straightforward.

Name it “szjug-app-repo” and remember it should be Git.


Next, add this repository as a remote of your local Git repo. On the repositories page of your CloudBees console there is a very helpful cheat sheet on how to do it.

First add the remote repository. Let's name it cb:

git remote add cb ssh://git@git.cloudbees.com/pawelstawicki/szjug-app-repo.git

Then push your commits there:

git push cb master

Now you have your code on CloudBees.

CI build server (Jenkins)

It's time to configure the app build on the CI server. Go to “Builds”; this is where Jenkins lives. Create a new “free-style” job.


Point the job at your Git repository, so that Jenkins always checks out a fresh version of the code. You'll need the repository URL, which you can take from the “Repos” page.

Set the URL in the job configuration.

The next thing to set up is the Gradle task. Add a build step of type “Invoke gradle script” and select “Use Gradle Wrapper”; this way the build uses the Gradle version provided with the project. Set “cb” as the Gradle task to run.


Well, that's all you need to have the app built. But we want to deploy it, don't we? Add the post-build action “Deploy applications” and enter the Application ID (spray-can in our case; the region should change automatically). This tells Jenkins where to deploy. It also needs to know what to deploy: enter build/distributions/szjug-app-job-*.zip as the “Application file”.

Because you deployed the application earlier from the command line, settings like the application type, main class and classpath are already there, so you don't need to provide them again.

It might also be useful to keep the zip file from each build, so let's archive it. Just add the post-build action “Archive the artifacts” and point it at the same zip file.

OK, that's all for the build configuration in Jenkins. Now you can hit the “Build now” link and the build should be added to the queue. When it is finished, you can see the logs, status etc. More importantly, the application should be deployed and accessible to the whole world. You can now change something in it, hit “Build now” again, and after the build finishes, check that the change is live.

Tests

You probably also noticed there is a test attached. You can run it with gradlew test. It's a specs2 test mixing in the MyService trait, so we have access to myRoute, and Specs2RouteTest, so we have access to spray.io's testing facilities.

The @RunWith(classOf[JUnitRunner]) annotation is necessary for Gradle to run the tests.

Now that we have tests, we'd like to see the test results. That's another post-build step in Jenkins: “Add post-build action” -> “Publish JUnit test result report”.

Gradle doesn’t put test results where maven does, so you’ll need to specify the location of report files.
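With the Gradle wrapper used above, the JUnit XML reports typically end up under build/test-results. A pattern along these lines should work as the report files location (an assumption; check where your Gradle version actually writes the reports):

```
build/test-results/**/*.xml
```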


When it's done, subsequent builds should show the test results.

Trigger build job

You now have a build job that can build, test and deploy the application. However, it runs only when you trigger it by hand. Let's make it run every day, and after every change pushed to the repository (in Jenkins these are the “Build periodically” and “Poll SCM” triggers).


Summary

So now you have everything necessary to develop an app: a Git repository, a continuous integration build system, and infrastructure to deploy the app to (continuously, actually).

Think of your own app, and… happy devopsing ;)

Sunday, May 25, 2014

Validation of parameter passed to mock call in Spock

If you use Spock, sometimes you want to verify a mock method call. You can of course do it like this:

1 * service.method({
    it.firstname == "Peter" && it.surname == "Stawicki"
})

The problem here is that when the parameter passed to the method is wrong, you won't know what is wrong with it:
Too few invocations for:

1 * service.method({
            it.firstname == "Peter" && it.surname == "Stawicki"
        })   (0 invocations)

Unmatched invocations (ordered by similarity):

1 * service.method(eu.vegasoft.spocktest.validateparam.Person@40f892a4)
Sometimes we also want to see exactly what was wrong with the parameter passed to the mock method. We can get that by putting explicit asserts inside the argument-constraint closure (the trailing true makes the constraint match once the asserts pass):

1 * service.method({ Person p ->
    assert p.firstname == "Peter"
    assert p.surname == "Stawicki"
    true
})
Now when the passed parameter does not match, we can see exactly what was wrong with it:
Condition not satisfied:

p.firstname == "Peter"
| |         |
| Paweł     false
|           3 differences (40% similarity)
|           P(aw)e(ł)
|           P(et)e(r)
eu.vegasoft.spocktest.validateparam.Person@34b23d12

Thursday, April 24, 2014

Do not underestimate the power of the fun

Do you like your tools?

Are you working with the technology, programming language and tools that you like? Are you having fun working with it?

When a new project starts, the company has to decide what technologies, frameworks and tools will be used to develop it. The most common-sense factor to take into consideration is a tool's ability to get the job done. However, especially in the Java world, there is usually more than one tool able to pass this test. Well, usually there are tens, if not hundreds of them. So other factors have to be used.

The next important and also quite obvious one is how easy the tool is to use, and how fast we can get the job done with it. "Easy" is subjective, and "fast" depends strongly on the tool itself and the environment it is used in, like the tool's learning curve or the developers' knowledge of it.

While the developers' knowledge of a tool is usually taken into account, their desire (or lack of it) to work with the tool usually is not. Here I would like to convince you that it is really important too.

Known != best

There are cases where it's better to choose cool tools instead of known ones. Yes, the developers need to learn them, and that obviously costs some time, but I believe it is an investment that pays off later. Especially if the alternatives are tools the devs are experienced with but don't want to use anymore. There are probably some people who like to code in the same language and use the same frameworks for 10 years, but I don't know many of them. Most of the coders I know like to learn new languages and use new frameworks, tools and libs. Sadly, some of them can't, because of corporate policies, customers' requirements or other restrictions.

Why do I believe such an investment pays off? If you think a developer writes 800 LOC/day, so 100 LOC/hour, so more than 1.5 LOC/minute... well, you're wrong. Developers are not machines working at a constant speed from 9 to 5. Sometimes we are "in the zone", coding like crazy (let's leave the code quality aside); sometimes we are creative, working with pen and paper, inventing clever solutions, algorithms etc.; and sometimes we are just bored, forcing ourselves to put the 15th form on the page or write boilerplate code.



The power of fun

Now ask yourself: in which situation do you (or your developers) usually find yourselves? If you are often bored, working for the 5th year with the same technology and tools, think back to when you were learning them. Remember using them for the first time? Were you bored then, or rather excited? Were you less productive? It's a truism, but we are not productive when we have to force ourselves to work. Maybe it's a good idea to change your work to be more fun? Use some tools you don't know (yet) but really want to try? It might seem you are going to be less productive, at least at the beginning, but is that really true? Moreover, if the new tool lets you write less boilerplate, or gives you closures or anything else that can make you faster and more efficient, it seems a really good investment in the long run.

There is one more advantage of cool and fun tools. If you are a company owner, do you want your business partners to consider your company expensive but very good, delivering high-quality services worth the price, or not-so-good but cheap? I don't know any software company that wants the latter. We all want to be good, and to earn more (but well-deserved) money. Now think about where the good and the best developers go. Do they choose companies where they have to work with old, boring tools and frameworks? Even if you pay them a lot, the best devs are not motivated by money; you probably know that already. Good devs are the ones who like to learn and discover new stuff, and there is no better way to learn new stuff than working with it. And not many things are as much fun for a geek as working with languages, technologies and tools they like.




So, when choosing tools for your next project, take fun factor into account. Or even better - let the developers make the choice.

--
This presentation might be interesting: http://www.infoq.com/presentations/Scala-Guardian It's Graham Tackley's story about how they introduced Scala at the Guardian, and what happened then.

Cool image courtesy of Łukasz Żuchowski http://blog.zuchos.com

Tuesday, January 22, 2013

Spock testing framework

Some time ago I gave a presentation about Spock at the Szczecin JUG. Later I also gave this presentation at my company SoftwareMill's meeting.

I think it's high time to share it on my blog too: http://amorfis.github.com/spock-pres/ (navigate with arrow keys).

Friday, November 2, 2012

FEST Assertions for Joda Time

Do you write unit tests? Of course you do. Do you use Joda Time? I think so. Do you use FEST Assertions? You should try it if you haven't yet. With FEST Assertions we can write fluent code like this:
assertThat(result).isEqualTo(expected);
assertThat(testRunSeconds).isLessThan(maxTestRunSeconds);
assertThat(someList).isNotNull().hasSize(3).contains("expectedEntry");
Now let's assume we have some functions that return a Joda DateTime, and we want to test them. Can we do this with FEST Assertions?
assertThat(resultDateTime).isAfter(timeframeBeginning).isBefore(timeframeEnd);
No, we can't :( FEST Assertions don't handle Joda Time classes. However, do not worry :) At SoftwareMill we have written our own TimeAssertions for that :) So you can write your code like this:
TimeAssertions.assertTime(someTime).isAfterOrAt(someOtherTime);
TimeAssertions works with org.joda.time.DateTime, java.util.Date and org.joda.time.LocalDateTime. You can freely mix DateTime and Date, i.e. you can compare a DateTime to a Date, a DateTime to a DateTime, etc. A LocalDateTime can be compared only to instances of the same class, as it doesn't make sense to compare it to a DateTime or Date without specifying the time zone. TimeAssertions is available on GitHub. If you want to use it from a Maven project, add the repository:
<repository>
    <id>softwaremill-releases</id>
    <name>SoftwareMill Releases</name>
    <url>http://tools.softwaremill.pl/nexus/content/repositories/releases</url>
</repository>
And dependency:
<dependency>
    <groupId>pl.softwaremill.common</groupId>
    <artifactId>softwaremill-test-util</artifactId>
    <version>70</version>
</dependency>
Happy testing!

Wednesday, September 26, 2012

Get script's own directory in bash script

It was always a problem for me to get, inside a script, the directory the script is stored in. Thanks to this SO question (http://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in) it's not a problem anymore. As it says:
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
Or, to get the dereferenced path (all directory symlinks resolved), do this:
DIR="$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
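A quick way to see it in action is to write a throwaway script that prints its own directory and then call it from somewhere else (the file names below are made up for the demo):

```shell
# Save a script that prints its own directory, then invoke it from /
# to show the result does not depend on the caller's working directory.
tmpdir=$(cd "$(mktemp -d)" && pwd)
cat > "$tmpdir/whereami.sh" <<'EOF'
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "$DIR"
EOF
cd /
bash "$tmpdir/whereami.sh"   # prints the temp directory, not /
```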

Saturday, September 15, 2012

Get specific PC IP from "arp -a"

I want to extract the IP of the machine leonidas. arp -a returns this line (among others):
leonidas.home (192.168.1.5) at 0:1c:c0:de:8f:28 on en1 ifscope [ethernet] 

To get only the IP:
arp -a | grep leonidas | cut -f 2 -d ' ' | sed 's/[()]//g'
prints
192.168.1.5
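You can try the extraction without touching arp at all by feeding the pipeline the sample line from above:

```shell
# The same cut/sed combination applied to the example arp -a output line.
line='leonidas.home (192.168.1.5) at 0:1c:c0:de:8f:28 on en1 ifscope [ethernet]'
ip=$(echo "$line" | cut -f 2 -d ' ' | sed 's/[()]//g')
echo "$ip"   # prints 192.168.1.5
```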