ScotSoft 2016 – Advanced Git and GitHub usage

I attended the ScotSoft 2016 conference way back in October 2016. I was asked to do a brief ten-minute talk at my company's monthly TechX event. The talk that I chose to discuss was Advanced Git and GitHub by Mike McQuaid. Mike works for GitHub and wrote a book called Git In Practice. I thought it might be useful if I also wrote down in a blog post what I spoke about at the TechX event.

So without any further ado, here it is:

Since joining the company I have used Git in a number of projects, and for about a year and a half I have been the GitLab admin in our sector and a wee bit beyond. Using Git almost daily, you really only use the commands clone, checkout, commit, push, pull, and occasionally branch. As a GitLab admin, I'm trying to improve my Git skills and learn more, because when someone comes to you for help you want to help in any way that you can and give them the correct information. I thought the ScotSoft talk might help.

Mike's talk was fast paced and involved him going through a large number of commands, explaining what they were and giving small examples of how to use them. I was interested in learning from someone with advanced knowledge and eager to pick up any tips and tricks from an expert. For me there can be a wee bit of fear when dealing with the edge cases: when merges go wrong, commits go missing, or you push the wrong branch.

Git Commands

The talk was based around GitHub, but Mike had to explain the Git commands to support the GitHub discussion. I can't remember him going into too much detail about GitHub, maybe the occasional URL or reference to the GitHub GUI. I vaguely remember him referencing SourceTree early in the talk, but I could be mistaken.

A list of commands that I had in my notebook are:

gitk (Windows and Linux) and gitx (Mac).

gitk comes with Git, while gitx is an open-source clone of gitk for the Mac. It allows you to see commits and branches visually. Clicking on each line shows the commit message and the details around the commit. Before I knew this existed I used SourceTree (other Git GUIs are available) for this purpose, because I like a visual representation of my commits and branches.
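For example, to open the viewer showing every branch rather than just the current one (a minimal sketch; gitk ships with Git):

gitk --all    # visualise all branches, not just the checked-out one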

Blame

git blame displays every line of a file and tells you who last changed it. I have only ever seen this within GitLab, by clicking a button and seeing a visual representation, rather than at the command line.
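At the command line it looks something like this (README.md is just an assumed file name):

git blame README.md             # annotate every line with commit, author, and date
git blame -L 10,20 README.md    # restrict the annotation to lines 10 to 20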

Bisect

Scan through history quickly.

If you have a bug and you know a commit at which that bug didn't exist, you can traverse through your commits, checking each one and marking it as good or bad, until you find the commit that introduced the bug.

Examples:

  • git bisect start – kicks off the process by cleaning up any previous bisect operations.
  • git bisect bad – would mark the current commit as bad and checkout the next commit.
  • git bisect good – would mark the current commit as good.
  • git bisect run ls <filename> – automate the search: "run" executes a command (here ls, to test whether a file exists) on each commit and uses its exit code to mark the commit good or bad, continuing until it finds the first bad commit.
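Putting it together, a typical session might look like this (a sketch, assuming v1.0 is a tag you know was good):

git bisect start          # clean up any previous bisect and begin
git bisect bad            # the commit you are on has the bug
git bisect good v1.0      # this older tag was fine
# git checks out a commit halfway between; test it, then mark it:
git bisect good           # or: git bisect bad
# repeat until git names the first bad commit, then:
git bisect reset          # return to where you started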

rerere

Reuse recorded resolution. It is not enabled by default. To enable it you would type: git config --global rerere.enabled true

If you find yourself fixing the same merge conflict again and again, this feature looks at the conflicting input files; Git recognises that you have solved the conflict before and uses the recorded resolution instead of leaving the file conflicted.

The git merge command will print an extra message detailing that it has done this, but you still need to commit yourself, in case you did not want to go ahead with the recorded resolution for any reason.
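If rerere kicks in during a merge, you should see something along these lines (the branch and file names are made up):

git merge feature-branch
# Resolved 'src/app.c' using previous resolution.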

Mike said he has never seen the command do the wrong thing.

Config

Personally configure Git.

Examples:

  • User details
  • Aliases
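For example (standard config commands; the alias is just an illustration):

git config --global user.name "Your Name"
git config --global user.email "your_email@example.com"
git config --global alias.st status    # "git st" now runs "git status"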

Describe

Generate a version number based on tags, for when you have a tag but one or more commits have been made after it.

Example:

  • v1.0-<n>-<sha1> (the most recent tag, the number of commits since it, and an abbreviated commit hash)
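Running it looks something like this (the output is illustrative; Git prefixes the abbreviated hash with a "g"):

git describe --tags
# v1.0-2-gdeadbee    i.e. two commits after the v1.0 tag, at commit deadbee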

Reflog

It is very hard to lose information that you have committed.

A history of the commits is kept in Git (for about 30 days); every action is recorded. It has to be on the same machine. Think of it like a banking website recording the actions you have performed on their site, or the government recording every website you have visited for a year.

git reflog – shows the history of commits that the HEAD pointer has pointed to.
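This makes it possible to recover a commit you thought was lost (a sketch; the hash and message are invented):

git reflog                           # e.g. deadbee HEAD@{2}: commit: fix the thing
git checkout -b recovered deadbee    # resurrect the "lost" commit on a new branch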

Rebase

Moves the parent of a branch.

Say you create a branch from a tag but then realise that the branch should have been taken from a release branch. Instead of deleting the branch and creating a new one from the release branch, you can rebase onto the release branch and move the parent commit over. Rebase will replay the commits and rewrite the history.

After a merge conflict, you will need to run git rebase --continue

-i: an advanced, interactive version that allows you to play about with commits. You can move commits, edit commit messages, and squash to meld commits together.
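The scenario above maps onto the --onto form of the command, and -i takes a range (the branch and tag names here are assumptions):

git rebase --onto release v1.0 my-branch    # move my-branch's parent from the v1.0 tag onto release
git rebase -i HEAD~3                        # interactively rework the last three commits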

Filter branch

Lets the Git user rewrite the history. It goes through the entire history, for example to find and remove a file. You would need to get other people to re-clone their repository afterwards.
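A common sketch for purging a file from every commit on the current branch (the file name is hypothetical, and this rewrites history, so treat it as destructive):

git filter-branch --index-filter 'git rm --cached --ignore-unmatch passwords.txt' HEAD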

Cherry pick

The command can be used to select a commit, typically from another branch, and apply it to the branch you are currently on.
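For example (abc1234 is a placeholder for the hash of the commit you want to copy):

git checkout release       # the branch that should receive the change
git cherry-pick abc1234    # apply that single commit here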

Cherry

It can be used to check whether a cherry pick has gone a bit wrong. The man pages say that it finds commits yet to be applied to the upstream (I take the upstream to mean the remote branch).
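For example (the branch names are assumptions; "+" marks commits not yet in the upstream, "-" marks ones that already are):

git cherry -v origin/master my-branch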

Subversion (SVN) / Team Foundation Server (TFS)

Git SVN is part of Git

It talks to an SVN repo and is not just for migration: it can be used to clone an SVN repo, create branches, merge, and submit back to Subversion.

Clone the trunk of the SVN repository and you can use Git locally, creating branches and merging those branches back into the main trunk, then push back into the Subversion repository.
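A minimal round trip might look like this (the URL is a placeholder):

git svn clone https://svn.example.com/repo --stdlayout    # assumes the usual trunk/branches/tags layout
git svn rebase     # pull new SVN revisions into your local branch
git svn dcommit    # push your local commits back to Subversion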

git-tfs provides the same function for TFS, but you will need to download it from GitHub and configure it. This wasn't part of the talk, but I thought it was worth mentioning while talking about the inbuilt Git SVN.

Housekeeping – commit messages

Like an email: the first line is the subject, the other lines are the body.
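A message following that shape might look like this (the content is invented for illustration):

Fix expiry check on the user form

Dates in the past were being accepted, which locked the account
immediately. Validate the date before saving.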

Conclusion

In conclusion, I found the talk to be fast paced, and sometimes that led to him talking about something with you saying "Wait a minute, what?" while Mike had already moved onto a different command or discussion. As time marched on, Mike said at one point that he wasn't going to cover some commands in any great detail, but he designed his talks so that the listener could try these commands out and research them.

I did enjoy the talk and I now appreciate how difficult it is standing up in front of people trying to produce content. I had started a "TechX project" at home to showcase and understand some of the commands from this talk. However, I don't think it would quite have worked within the time, and I probably could not have created a repository big enough to demonstrate all the commands.

What I got most from Mike's talk is that there are tools in Git that allow the user to recover, and that removes "the fear" when you are going to do something slightly advanced and out of your comfort zone.

A video of the same talk at JavaOne

Creating a user with Ubuntu terminal

Like most of the blogs I write, I am writing with myself in mind a bit, but if it provides help to anyone, that's great. One of the many reasons for writing this post is that next time I go to create a new user on any Linux server I don't need to go to several blogs trying to remember parts of a command, or why it matters if I miss the -a switch.

If you use the man command in the terminal, e.g. man usermod, it will provide an explanation of the command and its switches.

Create a user via the terminal

Sign in with an existing user (via ssh or on the box) that has sudo access.

sudo adduser testcreatinguser

This command will create the user testcreatinguser and ask the person creating the user to set a new password. I recommend that you select a strong password.

It will also ask for the user's full name, work phone, etc. The only one that I didn't leave at the default was the full name, which I set to "Test User". If you need the other details then you can fill them in.

After creating the user I wanted to add the new user account to the sudo group.

sudo usermod -aG sudo testcreatinguser

The "-a" (append) is important: without it, the new user will be removed from any other groups they belong to. "-G" specifies the groups, in this case "sudo". You can check the result as shown below.
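To confirm the group change took effect, use the standard id utility (the numeric ids below are illustrative):

id testcreatinguser
# uid=1001(testcreatinguser) gid=1001(testcreatinguser) groups=1001(testcreatinguser),27(sudo)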

Expire user account

If the new user's access needs to be revoked for any reason, the command is as follows:

sudo usermod --expiredate 1 testcreatinguser

The message received is: Your account has expired; please contact your system administrator.

To allow access again:

sudo usermod --expiredate "" testcreatinguser

Lock password for user account

Another way to stop the new user gaining access is to lock their password:

sudo passwd -l testcreatinguser

After successfully entering the command into the terminal, you will see "passwd: password expiry information changed."

This will cause the user's password to be refused when they try to log in.

To reinstate the user account’s password:

sudo passwd -u testcreatinguser

Again, you will see "passwd: password expiry information changed."
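To check whether an account's password is currently locked, the status switch helps; the second field of the output shows L for locked and P for a usable password:

sudo passwd -S testcreatinguser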

Most of this information can be found elsewhere on the internet but it might prove useful to have it in one post.  Until next time.

Is it okay to fail?

I believe that I have conditioned myself to fear failure, that a mistake or an error in judgement is so bad that I am no longer a good colleague or person, that I don’t add value to my team. 

On the flip side this fear drives me to prove to myself and others that I can add value.  Fear of not being able to do a job that I love.  Fear of looking like an idiot in front of people I respect.

When writing this post I researched planning to fail in software development. One of the first searches I did brought back articles on how software projects fail, not any about how to learn from the failure of a software project. I find that amazing. The mantra "Failure to plan is planning to fail" was everywhere.

Why should I be so fearful?

In the software development world, isn't it our failures that shape our career, our experiences, and our learning more than our successes? If you think about it, software developers around the world are failing every day, from defining logic incorrectly and having to find a different approach to the problem, to finding bugs in their code, or their users finding bugs in their code.

Scientists fail but they use the evidence of their failure to continue and each failure gets them closer to a successful result. Even a toddler trying to walk for the first time fails but gets up and tries again until they can walk.

I believe that I need to start thinking of failure as a good thing: to turn what is thought of as a negative into a positive and stop the fear, treating every failure as a chance to get better and edge closer to a successful result.

How have we got to the point where failure isn’t planned for?

I think that one of the steps to take is that failure should be planned for. I have recently started to try and run every day (a "running streak"), and one of the pieces of advice that I have been given is to plan to fail. This lets me think of all the obstacles that could stop me going for a run and remove them by considering the pitfalls ahead of time.

One of the most obvious things that could help is testing: unit, integration, system, and manual testing. The more tests that cover the code, or each area of the system, the more the risk of failure is mitigated. What if you do write all the relevant tests, but a developer makes a mistake and even the unit tests are wrong? You could have acceptance testing with the user. What if the user doesn't notice? Involve the user; communicate plans and failures to them. There are numerous scenarios that you could go through.

What about building time into plans for rework and bug fixing? 

In a perfect world we would like to think that our code will not need to be reworked or that no bugs will be found, but inevitably this will happen, so why not build it into the plan? A user testing the system might not like a particular feature, the feature might not be correct, or a bug might be found. It is better to be prepared.

Is failure accepted?

As software developers we can fall into the hole of thinking that our code should be perfection, and go down the silly road of thinking everyone else's code should be as well. We are not accepting of mistakes, especially if the pressure is on. We have all written code that, six months down the line, makes us say "What the hell was I thinking?". We need to cultivate a culture of acceptance of failure. Communicate with your team that it is okay. Be humble and listen to the developer fixing the code. Developers will then relax and produce better code as a result. We write better code when we are happy!

So I am finally realising that it is okay to fail and that it happens more than we realise, and to not be too hard on myself (although that might take a bit of work). It is healthy to fail. Each refactor is a chance to improve and get better, even if it is a bitter pill to swallow or there is a brief period where you feel like an idiot. I believe that we need to get away from defining success and failure as binary and try to think that all the failures encountered could lead to success… eventually.

I hope you find this post useful. I would be interested to hear your thoughts.

Web API documentation using Swagger / Swashbuckle

I have recently been tasked in work with documenting an existing Web API. This Web API used an older version of Swagger, which had been customised to get around the features that the version did not address at the time. My task was to update it to use the most recent version of Swashbuckle from NuGet. Swashbuckle is used to add Swagger to Web API projects.

The first step is to add Swashbuckle via NuGet.

Install-Package Swashbuckle.Core

The next step is to customise the Swagger configuration. If you use OWIN, like our API did, then the Swagger configuration goes into the Startup.cs class. A basic configuration would look like this:

httpConfiguration
    .EnableSwagger(c => c.SingleApiVersion("v1", "A title for your API"))
    .EnableSwaggerUi();

So with the basic configuration out of way, how do you describe what your API actions do?

Go to the project properties – Build – Output. There is a checkbox for enabling the XML comments, and then a path to an XML file will need to be specified, something like "Project.xml".

Then each API controller action is decorated with the XML comments that should be familiar, but if not:

/// <summary>This is an example</summary>
/// <param name="example">Description of example parameter</param>
/// <returns>Example return description value</returns>
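One caveat: ticking the build checkbox isn't enough on its own; Swashbuckle also needs to be pointed at the XML file in the configuration. A sketch (the path and file name are assumptions; match them to your Build output setting):

httpConfiguration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "A title for your API");
        // Hypothetical path; use wherever your build writes Project.xml.
        c.IncludeXmlComments(string.Format(@"{0}bin\Project.xml", System.AppDomain.CurrentDomain.BaseDirectory));
    })
    .EnableSwaggerUi();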

So now navigate to the Swagger URL. It should now build the documentation based on the XML comments:

https://<your-url>/swagger

The URL doesn't have to be the default. This can be customised in the Swagger config by changing the route passed to "EnableSwaggerUi":

httpConfiguration
    .EnableSwagger(c => c.SingleApiVersion("v1", "A title for your API"))
    .EnableSwaggerUi("docs/ui/app/{*assetPath}");

The assetPath parameter will add "index". The "index" page can also be customised, but it is recommended that you use the template provided by the Swashbuckle site and then change the bits that you need to update. Using a custom page can be achieved by altering the Swagger UI section in the config and setting CustomAsset to point to your own "index" file, wherever that is in your solution.

The configuration of Swagger is quite extensive and there isn’t a lot you can’t do. 

Issues

I was documenting an existing API and found out that Swashbuckle doesn't really deal with inheritance (an action was on a base controller). On doing a bit of digging, I saw a few things stating this was by design.

The other problem I had was with the response XML tags:

/// <response code="200">Valid example request</response>

Adding the XML tags didn't work for me; these comments were not added to the Swagger documentation. I had to rely on Swagger response data annotations to correctly document the response codes and what they meant in the context of our application.
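A sketch of what those annotations look like on an action (the controller and descriptions are hypothetical; SwaggerResponse ships with Swashbuckle):

using System.Net;
using System.Web.Http;
using Swashbuckle.Swagger.Annotations;

public class ExampleController : ApiController
{
    // Each attribute documents one response code in the generated Swagger.
    [SwaggerResponse(HttpStatusCode.OK, "Valid example request")]
    [SwaggerResponse(HttpStatusCode.NotFound, "No example with that id")]
    public IHttpActionResult Get(int id)
    {
        return Ok();
    }
}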

Use the “<remarks>” tag for the implementation notes.

Conclusion

In conclusion, I found Swagger easy to work with and easy to configure. There are plenty of configuration options that I didn't use that might suit your project. The documentation is good and can only get better. Swagger gives you a decent alternative to Word documents and keeps it all together in one place.

As ever if anything is incorrect or inaccurate, please let me know.

Updating SourceTree and Git Bash

I recently updated my SourceTree application and found that I could no longer use the Git Bash terminal.

This error appeared when I clicked on the Terminal button at the top right corner of the application:

"It has not been possible to start the Git Bash terminal."

This isn't the most informative error message that I've ever seen. I typed the error message into Google to see if I could get a better description of the issues that would cause SourceTree to behave in this manner. Most of the advice centres on upgrading Git Bash to v2.6.3 or rolling back to a previous version of SourceTree. There are a couple of tickets raised for this issue on the public SourceTree Atlassian Jira; however, the ones that I have seen are either duplicates, which have been closed and refer to tickets that I need to log in to see, or slight variations of the issue. So I thought I'd document what I did and see if it helps someone out.

Steps:

  1. Download version 2.6.3 of Git. You can go to https://git-scm.com/, or do what I've started doing and use https://chocolatey.org/ to install new software on my machines. You need to install Chocolatey first (the instructions are on the main page) and then, in whatever command line application you use, type something like "choco install git". This will download the Chocolatey package and then silently install the application.
  2. If you haven't set up Chocolatey then double click on the downloaded application. This will run the setup for Git. You can set this up the way that you like, but I went with the defaults (aside from selecting the options to create a shortcut on the desktop and adding Git Bash to the right click menu).

Note: I experienced several issues relating to the installation, which might be specific to my local machine but are worth documenting nonetheless.

Issues:

  1. At the end of the install, it would hang and never complete. I had to end the task using Task Manager, delete the directory that it had installed into, and then restart the installation.
  2. Occasionally on some attempts at the install, it would try to uninstall the last version. This would end in failure because it could not find the unins000 files, and give you no choice but to abort. The files were in the installation folder; however, I could not reference them. On this occasion, I used Revo Uninstaller to get rid of all references.
  3. If I didn't get rid of all references to the folder it was installing to, it would fling random errors, from files not being found to being unable to create the tmp directory.

The safe thing to do is to clear the directory out. I've also got it installed in the root of my C: drive.
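Once the install finally completes, a quick sanity check from a command prompt confirms the expected version is on the path:

git --version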

Now, with Git installed, we are on to starting Git Bash independently of SourceTree. When I first started bash.exe, it was just disappearing. When I started bash.exe in a command prompt, I could see that it was giving the error message "Access is denied". This was because my firewall, Comodo, was blocking the executable from running. I had to add bash.exe to the list of allowed applications for Defense+ within the HIPS section. To do this I went to Comodo in Advanced View (you can select this option by right-clicking on the Comodo tray icon and selecting "Advanced View").

Steps:

  • Open Comodo
  • Click on HIPS
  • Settings should open up
  • Select HIPS Rules
  • Click on the upwards arrow at the bottom of the screen.
  • Click Add
  • Browse for the application (in my case, C:\Git\bin\bash.exe)
  • Use the ruleset "Allowed Application"
  • Click OK.

You might also have to add mintty.exe (https://code.google.com/p/mintty/) from Git\usr\bin, but it is the same process as above. Git Bash should now start, as the firewall is no longer treating the application as a blocked intrusion.

The last thing to do is to open up SourceTree and click on Terminal. Git Bash should now open. Also, go to Tools – Options – Git; if you use system Git, the version should now be 2.6.3.

In conclusion, this is a task that shouldn't take that long but can be frustrating if you hit small errors. I hope that if you are hitting these errors then this post will help. As ever, if there are better ways of doing anything described here I would like to hear about it.

GitLab: Reload with full diff

Recently I had to code review a merge request with a large number of changed files.  There were so many files that GitLab could not show the full diff.

In this circumstance, GitLab gives the user the option of reloading the full diff, with a warning that it could affect performance. I needed to see the full diff, because the other option was going through the file list and using another diff tool on the individual files to see what had changed, good or bad. That wasn't really what I wanted to do.

When I clicked on the reload button, after a few seconds Notepad opened up with the full diff inside it. It wasn't easily legible, and wouldn't have been my preferred option.

Solution:

I found out that tweaking the URL slightly fixes the issue. The reload button requests this URL:

project/merge_requests/<merge_id>/diffs.json?force_show_diff=true

Removing the ".json" from the URL will show the diff where you would expect it to be, in GitLab, and it is formatted much more nicely than in Notepad.
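That is:

project/merge_requests/<merge_id>/diffs?force_show_diff=true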

It has been reported that this was fixed in version 7.10, but at the time of writing our version of GitLab is 7.13.4 and the problem remains.

Until it is finally resolved, the above solution is a workaround if you run into this problem.

#Windows10 My upgrade experience

The minute the word "free" was mentioned, I had decided to upgrade my PC to Windows 10. I have not had any bad experiences with any previous Windows upgrades that I've done, so I decided to go for it.

I didn't wait for the official notification from Microsoft, because it had started to bother me that it was now the 2nd of August and I could see social media lighting up with advertisements and experiences (good and bad). So I decided to take the plunge; what could go wrong, right?

So I dived into my email to pick out one of the many emails Microsoft had sent about upgrading to Windows 10, downloaded the media creator tool and started the process. The media creator tool allows the user to upgrade right away (with all the caveats of "make sure you have a backup" ringing in your ears) or to download a version of Windows 10. I chose the second option, first creating my backup and then downloading my media in ISO format.

The installation itself went okay.  It told me at the beginning that my version of Acronis True Image was not compatible (fine, I’ll upgrade) and it was reasonably quick. 

The issues came when I first logged in: when I clicked on the Start menu I got nothing. The same with Settings and Action Centre. I could, however, right-click on the Start button and get options to sign out. A quick Google showed a number of people were getting the same issue and were running this command in PowerShell to solve it:

  • Press Windows Key + R on your keyboard.
  • Key in PowerShell and hit Enter.
  • Right click on the PowerShell icon on the taskbar and select Run as Administrator.
  • Now paste the following command into the Administrator: Windows PowerShell window and press the Enter key:
    Get-AppXPackage -AllUsers | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}

This command did not work for me, so I quickly created a new profile to see if it was anything to do with the installation. I was fearing at that point that I would need to revert the upgrade back to Windows 7; that was only a concern because of the tinkering I might need to do to get it working as before. When I created the new profile everything just worked, although all my settings had gone, as you would expect.

At this point, concerned that the problem might be to do with local accounts, I checked another profile (Mrs I.D, the only profile that really matters in our house) and it was working as expected (phew!).

I decided to press on with the new profile for the minute to test out the new Acronis True Image 2015 (compatible with Windows 10), and it was at this point that I hit the blue screen of death (which wasn't actually blue, more turquoise) and it restarted my computer. This was consistent. I didn't try running it with any different settings because it explicitly stated that it was for Windows 10. Every cloud has a silver lining though, because I found a backup tool which fulfils my needs, Bvckup 2, and decided to ditch Acronis for good after a few weeks trialling it.

Another issue I had recently was between Chrome and Comodo Firewall. The latest upgrade to Chrome (45) stopped working, and this was caused by an incompatibility in Comodo, I think. One of the suggested fixes was to install Chrome x64, but I already had that running. The fix that worked for me was:

1. Go to the Advanced View of Comodo
2. Click on HIPS
3. Find Detect shellcode injection
4. Click on the Exclusions link
5. Add the location of chrome.exe to the exclusions.

This error was caused by the guard32.dll in Comodo, and although it is not strictly recommended, this was one of three workarounds given until a permanent fix can be found.

My Bluetooth headphones have also been a bit temperamental. I could not get them to connect and had to go through various cycles of pairing without success. Then the other day they just connected and have worked ever since. My drivers are up to date and everything I've checked looks okay.

So the only thing left for me to do now was to set up the new profile so that I could continue to use my machine day to day. I've not had many problems since, and I continue to try to work out what the problem is with my Windows 7 profile, but so far nothing has worked. It will no doubt be my pet project until I get a new machine.

I have also upgraded another computer to Windows 10 without any issues at all. So, in conclusion, my experience was not that bad; yes, there were problems, but nothing I couldn't fix or recover from. Windows 10 being new, there were always going to be small issues. Would I do it again? Absolutely! Even if it was just to frustrate myself.

Until next time…

Am I doing enough?

It's Saturday afternoon. You are spending some quality time with the family, but your brain is nagging at you to switch the laptop on, or go upstairs to your computer, and start knocking out this wonderful new application full of the stuff that you want to learn or improve your understanding of. You could also catch up with a bit of work, do a few emails, save time on Monday morning. You will only do a little bit. It will take only half an hour… If this is a familiar feeling, I don't think you are alone.

I've seen a few posts over the years, including I'm a phony. Are you? by Scott Hanselman and Syndromes Drives Coders Crazy in Business Insider. It interests me and I want to research it a bit more.

The closest word that I can associate with that feeling is guilt (developer guilt), but it isn't really that. I constantly feel a nagging that I am not doing enough, and I have always attributed it to a need for absolute perfection.

Most software developers see their job as a craft, and as part of that, you need to hone the skills to make you a better craftsman. Like anyone who wants to do well, you need to learn and practice. For a developer, this means coding, writing blog posts, reading books, researching topics of interest, social media, and podcasts. I do all of these (not all at once) and still feel guilty, or feel like a phony, if I don't organise my time to do them properly or don't give my full focus to the task.

Question: is it okay, or even good for you, to spend every waking minute at a computer and not give your brain time to switch off and focus on something else? I don't believe it is. There is a balance that needs to be struck, no doubt. There is a real risk of burnout and lower productivity if you are at the coal face for extended hours on a regular basis, and then you are no use to anyone, least of all yourself and the people you care about.

So here's what I do to stop these feelings (everyone likes a list, right?). I try to sit down on a Sunday and plan my week. I do this for non-technical tasks as well, like making dinners and exercising; I feel it is better to plan as much as possible. I personally use Trello, but it can be Google Calendar tasks, paper, or anything else that you are comfortable with. I set my board up with lanes for each day of the week. I also have a lane where I initially put all the tasks as I think of them. Secondly, I think about how long each task will take, or how long I will spend doing it. I use the Pomodoro Technique, and each pomodoro is twenty five minutes. I have set up coloured labels, each representing a number of pomodoros, and these are attached to each task.

This helps me to feel better that I'm doing everything I can possibly do to stay current and add value to my career and my workplace. It allows me to have time with the family but still have the focus to learn, and it stops the worry. I like to worry.

Another lesson is not to be too hard on yourself. There will be days when you don't feel like sitting at the computer reading blogs. There will be days when everything goes wrong. There will be times when you don't understand what you are reading. There will be times when someone asks a question and you won't know the answer, but that's okay.

The point is that you are asking yourself the question "Am I doing enough?" in the first place and taking the steps to resolve the situation.

Setting up SSH keys for a Git repository using SourceTree and BitBucket

For the past year or so, we've been using Git as our version control system. My introduction to the GUIs around Git was SourceTree (although I've made an effort to learn the commands), but I have also used posh-git and Git Bash. Recently, we've started using SSH keys instead of HTTPS, and I had to learn how to set up my repositories with SSH. Everywhere and everyone tells you this is straightforward, and it is when the critical path works, but when something is wrong it gets more difficult. A lot of unnecessarily complex documentation does not help either. So I'm going to detail all the steps that I took in the hope that it helps someone.

My setup for this task is Git (you can use the embedded Git within SourceTree), SourceTree, and BitBucket (I previously used Google Drive to host my Git repositories).

Stage 1 – Generating an SSH key

• Open SourceTree and click on the Terminal icon (this is Git Bash)

[Image: SourceTree ribbon]

• Type the following command in:
  • ls --all ~/.ssh (this will list any existing SSH keys in C:\Users\<username>\.ssh; this is the default location but can be changed when generating the key).
• Next, generate the key:
  • ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
  • It will ask you where you'd like to store the files. I accepted the default, but you can specify a directory if you wish.
  • Then enter a passphrase; I would recommend you provide a passphrase from a security standpoint.
  • You should now see something like this:
# Your identification has been saved in /Users/you/.ssh/id_rsa.
# Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
# The key fingerprint is:
# 01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db your_email@example.com
• There should now be two key files: id_rsa (private) and id_rsa.pub (public).

Stage 2 – SSH agent

• Still using the terminal (Git Bash) in SourceTree, type:
  • eval $(ssh-agent). There are many ways to start the SSH agent, but this is the only way it would work for me. It should give you a process id back, something like Agent pid 1234.
• Finally, use this command to add the new key:
  • ssh-add ~/.ssh/id_rsa
  • If successful, the output should say that an identity has been added.
  • You should never have to type in the passphrase again.

Stage 3 – Adding the SSH key to your BitBucket account

• Log into BitBucket
• Select the icon at the top right of the browser and select Manage Account
• From the Security menu, select SSH Keys then Add Key
• Paste your public key (id_rsa.pub) into the text area and then select Add Key again

Note: your public key in this file is in a different format from what BitBucket expects. My recommendation for this scenario is to go to SourceTree – Tools – Create or Import SSH Keys. This starts the PuTTY Key Generator, which has the ability to load existing keys. The generator will then show the public key in a user-friendly format to be copied and used within BitBucket.

[Image: PuTTY Key Generator]

Stage 4 – SourceTree

In Stage 1, the SSH key was generated and set up for the Git Bash terminal; now we want to take that SSH key and use it within the SourceTree GUI.

• The first step is to go to Tools – Create or Import SSH Keys
• Load your existing private key in.
• Click on "Save private key". This has to be saved in the PuTTY .ppk format. I would recommend that you don't save this private key to the .ssh folder, in case of conflicts between the two keys.
• Next is to launch the SSH agent; PuTTY comes with SourceTree.
• Make sure Pageant is running (the little computer with a hat on sitting in your Windows tray).

[Image: Pageant in the Windows tray]

• Add the key to the SSH agent by right-clicking on Pageant and selecting "Add Key". It is Pageant that stops the user from having to enter the passphrase all the time, by holding the key and making it available to SourceTree.
• A further step is to add the .ppk key under Tools – Options – General – SSH Client Configuration.

That's it! I was all around the houses trying to fix various errors and get this configured. Some of the problems I faced were:

• Permission denied (public key). I believe this was a combination of errors on my part. One, I had created too many key files in the .ssh directory and it didn't know which one to choose. Two, I hadn't set up SourceTree correctly: the SSH key had to be a .ppk key and not the id_rsa key which I'd generated.
• Could not open a connection to your authentication agent. I believe this was down to me changing from PuTTY to OpenSSH. OpenSSH just never launched, so no wonder it couldn't get a connection.
• It took ages to clone a repository. The SourceTree GUI doesn't give a lot of feedback about what is going on, not like Git Bash, so I thought it wasn't working.

My tip would be to test the connection using "ssh -T git@bitbucket.org". This command will provide decent feedback on whether or not you have authenticated. So open Git Bash and type this in.

A good topic for debate is why go to all the trouble of using SSH keys? Why not use HTTPS and cache your account details in winstore?

Update:

I discovered this morning that if you shut SourceTree down and then use the Git Bash terminal again, you will need to repeat Stage 2.


Learning AngularJS – Part 4

Sorry, it has been a while since my last post.

To finish off the first series of my AngularJS posts, I would like to talk about how I went about updating an existing person, plus a few other things that I have not covered up to now.

For those who have not read the previous posts, you can reach them here: The Beginning, Part 2, and Part 3.

Like adding a new person to our personnel record, the application will use the routing engine to determine where to go ("#" represents the routing engine), so our route this time is #/updatePerson. The route can then work out which controller and partial to use for updating.

Our route provider should look like this:

$routeProvider.when("/updatePerson/:id", {
    controller: "singlePeopleController",
    templateUrl: "/templates/updatePerson.html",
});

The ":id" part of the URL allows the id of the person to be set. I've kept this in the one file, but it might be better to split these up into multiple files; my intention is to do that.

The singlePeopleController then retrieves the person using the data service, which I created to avoid writing the same code over and over again. So the first action is to retrieve the person using the id that you have passed in. I originally retrieved the person by accessing a person object stored in memory instead of using the API to go back to the database, but I found that the update was going wrong. The details were not getting updated; the application would overwrite the new details with the old details. It was like the save was not happening at all.

After retrieving the right person, I proceeded with the update. The person was added to $scope.person, and I passed $scope.person to the updatePerson data service method. This posted the data to the API, which in turn saved it to the database. If it was successful, it would return to the beginning using $window.location = "/#".

var _updatePerson = function (existingPerson) {
    var deferred = $q.defer();
    $http.put("/api/People", existingPerson)
        .then(function (result) {
            var updatedPerson = result.data;
            _people.splice(0, 0, updatedPerson);
            deferred.resolve(updatedPerson);
        },
        function () {
            deferred.reject();
        });
    return deferred.promise;
};

function singlePeopleController($scope, dataService, $window, $routeParams, $log) {
    $scope.person = null;
    $scope.$log = $log;
    //$log.log("Id : " + $routeParams.id);

    dataService.getPerson($routeParams.id)
        .then(function (person) {
            // Success
            $scope.person = person;
            $log.log("Person : " + person.Salutation);
        },
        function () {
            alert("Cannot find person");
            //$log.log("Cannot find person");
            $window.location = "/#";
        });

    $scope.update = function () {
        dataService.updatePerson($scope.person)
            .then(function () {
                $window.location = "/#";
            },
            function () {
                alert("Cannot update person");
            });
    };
}

Instead of save(), this time I use update(). $scope.update is the function which the form calls on submit, i.e. when clicking the update button.

One of the issues that I did have to solve was how to put the updated salary into the salary list for that person. I wanted to search for that person and show the historical salaries for the personnel record:

person.Salary[0].Salary

I'm still not 100% sure if that is the correct way of doing this, but it works for now. I asked in the AngularJS forum, and a couple of people replied saying that this was a good way to access the list and update it. I still need to do a bit of work on bringing back the updated value.

What I still have left to do is implement a search, which would allow a user to search for a person and show the details of their salaries. I would also like to add some testing around AngularJS; the frameworks I have played with are Jasmine and Karma, because they are the ones on the AngularJS site.

The only other function that I have in the data service is _findPerson. It loops around the people object stored in memory and returns the found person if the id matches:

function _findPerson(id) {
    var found = null;
    $.each(_people, function (i, item) {
        if (item.Id == id) {
            found = item;
            return false;
        }
    });
    return found;
}

Another thing, completely off topic, that I can't get right is how to properly format code in a blog post. The formatting seems to go totally awry. If anyone knows how, please contact me.

I promise I won't leave it so long next time… my plan is to update this blog regularly so that my technical writing, communication, and blog posts improve.