
Wednesday, 27 February 2013

Ubuntu, Minicom & Cisco

Install Minicom
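Minicom is in the standard Ubuntu repositories, so (assuming an apt-based install) a single command does it:

sudo apt-get install minicom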

Find the name of your serial port
Next, you need to find out which device your serial port (including any USB adapter) is mapped to. The easiest way to do this is to connect the console cable to a running Cisco device. Now open up a Terminal using "Applications > Accessories > Terminal" and type this command:

dmesg | grep tty

The output will look something like one of these:

[    0.788856] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    0.789144] 00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[94023.461242] usb 2-1: pl2303 converter now attached to ttyUSB0
[107561.131086] type=1503 audit(1260922689.994:33): operation="open" pid=27195 parent=27185 profile="/usr/sbin/cupsd" requested_mask="w::" denied_mask="w::" fsuid=0 ouid=0 name="/dev/ttyUSB0

Look in this output for entries containing "tty". In this case the onboard serial port is "ttyS0", while the USB adapter shows up as name="/dev/ttyUSB0" (make sure it's plugged in). Now we are ready to configure Minicom to use this information.

Configure Minicom

Open a terminal using "Applications > Accessories > Terminal". Now type this command to enter the configuration menu of Minicom:

sudo minicom -s

Use the keyboard arrow keys to select the menu item labeled "Serial Port Setup" and then hit "Enter". Here is what I had to change:

Change the line speed (press E) & change to "9600"

Change the hardware flow control (press F) & change to "No"

Change the serial device (press A) & change to "/dev/ttyS0"

Or to use your USB port, change the serial device to "/dev/ttyUSB0"

Be sure to use the device name you found in the grep output.
Once the settings match, you can hit "Escape" to go back to the main menu. Next, select "Save setup as dfl" and hit "Enter" to save these settings as the default profile. Then select "Exit Minicom" to exit Minicom.

To find out if you have configured Minicom correctly, type this command in the terminal:

sudo minicom

After entering your sudo user password, you should be connected to your Cisco device.
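As an aside, if you'd rather not save a default profile, Minicom can also take the device and speed straight on the command line (a quick sketch using the USB adapter from earlier):

sudo minicom -D /dev/ttyUSB0 -b 9600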

Once connected, press Ctrl+A to access Minicom's commands: Ctrl+A then Z brings up the help screen listing every available command, and Ctrl+A then another letter runs a command directly, e.g. X to exit.

JIRA Jelly script - Transition to Closed

Over the past year or so I've become a big fan of Atlassian's JIRA. I've managed to frig around and get a pretty neat ticket/request solution in place for my company, but in doing so I've also become the de facto go-to person.

A new project needed to just receive emails, create an issue and then close it. Steps 1 & 2 are simple with IMAP & mail listeners, but I wasn't aware of a way to carry out step 3.

Enter, stage left, Jelly scripts: a scripting language supported within JIRA. It turned out to be rather simple, and a couple of hours' work produced this:

<!-- This script will parse all tickets matching "${filterNum}" and transition them to Closed state. -->
<!-- Paul Regan 27/2/2013 (That's a UK date, people!) -->
<JiraJelly xmlns:jira="jelly:com.atlassian.jira.jelly.enterprise.JiraTagLib" xmlns:core="jelly:core" xmlns:log="jelly:log">
<!-- Login as automation user  -->
      <jira:Login username="<jira-user>" password="<password>">

<!-- Set Some variables  -->
      <!-- 2 = Close Issue Transition (NB//TRANSITION NOT STATUS).  Can be seen on the transition URL -->
      <core:set var="workflowStep" value="2" />
      <core:set var="workflowUser" value="<jira-user>" />
      <core:set var="comment" value="This topic has been closed by jelly script automation" />
      <!-- Run the SearchRequestFilter against a filter.  15231 = All Tickets -1 Day or 15232 = All Open Tickets -->
      <!--The numeric comes from the filters URL -->
      <core:set var="filterNum" value="15232" />
      
<!--Run the search using filter defined above -->
<jira:RunSearchRequest filterid="${filterNum}" var="issues" />

<!-- Build array of issues matching filter & run through it -->
      <core:forEach var="issue" items="${issues}">
      <!-- Log updates are written to /opt/atlassian/jira/data/log/atlassian-jira.log. -->
                <log:warn>Closing issue ${issue.key}</log:warn>
                <jira:TransitionWorkflow key="${issue.key}" user="${workflowUser}" workflowAction="${workflowStep}" comment="${comment}"/>
                
                <!-- Useful debugging aid.  Remark the actions and just use this to write a comment in results -->
                <!-- <jira:AddComment comment="This would be closed" issue-key="${issue.key}"/> -->
                <!-- Useful debugging aid.  Remark the actions and just use this to display results -->
                <!-- ${issue.key} -->

      </core:forEach>
      </jira:Login>
</JiraJelly>



Sunday, 24 February 2013

Raspberry PI & OpenVPN


The majority of these instructions come from : blog.remibergsma.com and have been reproduced with kind permission.

Like most things with Linux my working solution was actually a culmination of information from various places.

sudo apt-get install openvpn

After the install finishes, you need to generate keys for the server and the client(s). OpenVPN ships with the ‘easy-rsa’ tool. It’s easiest to copy the example folder and work from there.

sudo cp -R /usr/share/doc/openvpn/examples/easy-rsa /etc/openvpn
cd /etc/openvpn
sudo chown -R pi:pi *
cd /etc/openvpn/easy-rsa/2.0

The ‘easy-rsa’ tool has a file called ‘vars’ that you can edit to set some defaults. That will save you time later on, but it's not required.

Load the vars like this (note the two dots):

. ./vars
(that's: dot, space, dot, slash, vars)

Generate the keys:

./clean-all
./build-ca
./build-key-server <server>
./build-key <client-name>
./build-dh

The first line makes sure we start from scratch. The second generates a key for the Certificate Authority. The key for the server itself is generated on the third line. Repeat the fourth line for each client that needs to connect. Finally, we need the Diffie-Hellman parameters as well, which are generated on the fifth line and will take a few minutes to complete.

Copy the keys to the OpenVPN folder (easy-rsa writes them into the keys/ subdirectory):

cd keys
sudo cp ca.crt ca.key dh1024.pem <server>.crt <server>.key /etc/openvpn

Last step is to configure the server. You can copy the example config and make sure it points to the certs you just created.

sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn
sudo gunzip /etc/openvpn/server.conf.gz
sudo nano /etc/openvpn/server.conf


Change any settings (DHCP scope, OpenVPN port, etc.) that are particular to your install in server.conf.

When you’re done, start OpenVPN like this:

sudo /etc/init.d/openvpn start

The first time I started OpenVPN it failed with :


/var/log/syslog
<snip>
raspberrypi ovpn-server[22119]: OpenVPN 2.2.1 arm-linux-gnueabihf [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Apr 28 2012
raspberrypi ovpn-server[22119]: NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
raspberrypi ovpn-server[22119]: Diffie-Hellman initialized with 1024 bit key
raspberrypi ovpn-server[22119]: TLS-Auth MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
raspberrypi ovpn-server[22119]: Socket Buffers: R=[163840->131072] S=[163840->131072]
raspberrypi ovpn-server[22119]: ROUTE default_gateway=192.168.99.1
raspberrypi ovpn-server[22119]: Note: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)
raspberrypi ovpn-server[22119]: do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
raspberrypi ovpn-server[22119]: /sbin/ifconfig 10.8.0.1 pointopoint 10.8.0.2 mtu 1500
raspberrypi ovpn-server[22119]: Linux ifconfig failed: external program exited with error status: 1
raspberrypi ovpn-server[22119]: Exiting
</snip>

Another VPN app I have, which also uses /dev/net/tun, failed with the same error.  A reboot fixed this and so far it's not come back.
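For reference, that "No such device" error usually means the tun kernel module isn't loaded; a reboot sorts it out, but it can also be loaded by hand (a sketch I didn't end up needing after the reboot):

# see whether the tun module is currently loaded
lsmod | grep tun
# load it without rebooting
sudo modprobe tun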

Check the state of the TUN0 interface

ifconfig tun0

All being well you’ll see:

tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255
 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1
 RX packets:49 errors:0 dropped:0 overruns:0 frame:0
 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:100
 RX bytes:3772 (3.6 KiB) TX bytes:1212 (1.1 KiB)

You should now be able to connect to the OpenVPN server with a client. Whichever client you choose, you will need the client.crt, client.key and ca.crt files, plus the IP address of your Raspberry Pi.

I chose Tunnelblick, which after a rather convoluted profile setup seems to work well on OS X 10.8.2 (ML).

Have a look at ‘/var/log/syslog’ to access the logfiles. You'll be able to see which client connects:


Jan 5 22:07:56 raspberrypi ovpn-server[14459]: 1.2.3.4:64805 [client-name] Peer Connection Initiated with [AF_INET]1.2.3.4:64805

From the VPN client, check that you can ping the LAN IP address of your RPi. Assuming that works, you just need to push some routes around and you should be set.

VPN Client----VPN Subnet---RPI---LAN Subnet

To enable traffic from the VPN network to your local subnet you will need routes on each end to tell devices how and where to send traffic. To enable this on the VPN side:

sudo nano /etc/openvpn/server.conf

Find the push routes section and add a 'push route' statement that reflects your local network address, as in the example below.
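For example (a sketch only - 192.168.99.0/24 is assumed from the addresses earlier in this post, so substitute your own LAN subnet):

# in /etc/openvpn/server.conf - tell VPN clients how to reach the LAN behind the Pi
push "route 192.168.99.0 255.255.255.0"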

You will also need to add a route back to the VPN Subnet, probably by adding a static route to your internet edge device.

Finally, enable routing on the Raspberry Pi:

There are a couple of suggested ways to do this, but this is what worked for me:

sudo nano /etc/sysctl.conf
uncomment: net.ipv4.ip_forward=1
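If you don't fancy waiting for a reboot, the same setting can also be applied on the spot (the reboot below works just as well):

sudo sysctl -w net.ipv4.ip_forward=1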

Reboot your device.  You should now be able to connect to the VPN and ping other devices on your local network, and vice versa from the local network to VPN clients.

Saturday, 23 February 2013

Foscam F19821W & Apache Reverse Proxy

Finding myself with a few redundant Foscam F19821W cameras in my possession I thought I'd set them up around the house.

Getting them working with the browser plugin was relatively painless and gave live view and everything you'd expect from the manufacturer's app.

The next logical step was to access them from anywhere.  The cameras come with UPnP and a DDNS setup.  No, I don't want that; I want control of what comes in and out.

The installed firmware only allowed H.264 streams. After an update to 1.1.1.10, I ran:

http://<camera_ip>:<port>/cgi-bin/CGIProxy.fcgi?usr=<user>&pwd=<password>&cmd=setSubStreamFormat&format=1

which enables an MJPEG stream that you can consume using a browser or something like VLC:

http://<camera_ip>:<port>/cgi-bin/CGIStream.cgi?cmd=GetMJStream&usr=<user>&pwd=<password>
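For example, it can be opened straight from a shell with VLC - quote the URL so the shell doesn't swallow the ampersands (placeholders as above):

cvlc "http://<camera_ip>:<port>/cgi-bin/CGIStream.cgi?cmd=GetMJStream&usr=<user>&pwd=<password>"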

I now have a couple of options to make these available outside.
  1. Port forwarding each Foscam port on my internet router. << Easy
  2. Reverse proxy. << Not so easy
Of course I wanted the not-so-easy option and a single place to control and distribute access.  I don't like the idea of exposing the cameras directly.

The reverse proxy consisted of my go-to device, a Raspberry Pi, and Apache.  It took a while to get the config nailed.

I'm not going to go into the entire Apache setup but I chose to create a virtual host :


 <VirtualHost *:80>
 ServerAdmin 
 ServerName <host>.<domain>
 ProxyRequests Off
 ProxyVia Off
 RewriteEngine On
 
 <Proxy *>
  Order deny,allow
  Allow from all
 </Proxy>
 # Used for iFrames
 ProxyPass /foscam1/ http://<camera_ip>:<port>/
 ProxyPassReverse /foscam1/ http://<camera_ip>:<port>/

 DocumentRoot /var/www/foscam
 <Directory /var/www/foscam>
  Options Indexes FollowSymLinks MultiViews
  AllowOverride None
  Order allow,deny
  allow from all
 #Rules to rewrite camera urls
 RewriteEngine On
 RewriteRule ^cgi-bin/(.*)$ /camera1/cgi-bin/$1 [L]
 RewriteRule ^css/(.*)$ /camera1/css/$1 [L]
 RewriteRule ^images/(.*)$ /camera1/images/$1 [L]
 RewriteRule ^lg/(.*)$ /camera1/lg/$1 [L]
 </Directory>

# ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
# <Directory "/usr/lib/cgi-bin">
#  AllowOverride None
#  Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
#  Order allow,deny
#  Allow from all
# </Directory>
  
 ErrorLog ${APACHE_LOG_DIR}/home-error.log
 CustomLog ${APACHE_LOG_DIR}/home-access.log combined
</VirtualHost>

Hit your public IP with the /foscam1/ URI and it will redirect to your camera.  You can consume the MJPEG stream and get to the management app, but there's no live view due to the way Foscam's plugin works with the camera's media port.

Alternatively, build a simple HTML page with iFrames so that Apache serves each camera stream, which is what I did.  It's also a good idea to wrap some Apache authentication around this (see the sketch below), and if you have the option, use DDNS to clean up the URL if you're on a DHCP internet link.
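A minimal basic-auth sketch, assuming the Apache 2.2-style directives already used in the vhost above and a hypothetical user name:

# create the password file (-c creates it; drop -c when adding further users)
sudo htpasswd -c /etc/apache2/.htpasswd <user>

Then inside the VirtualHost:

 <Location />
  AuthType Basic
  AuthName "Cameras"
  AuthUserFile /etc/apache2/.htpasswd
  Require valid-user
 </Location>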

I don't yet know if I'll leave it this way.  I doubt Foscam have a particularly robust security ethos, and after this week's amazing number of hacks it's only a matter of time before a vuln is found.  OpenVPN is next on the agenda, so I may put all this behind that.

Friday, 8 February 2013

Raspberry PI Wireless WPA-Enterprise

From my previous post the hardware was working fine and I'd already had the RPI connecting to my home WPA2 networks.

Finding a working WPA2-Enterprise config was a painful exercise in trial and error.
Using :

sudo wpa_supplicant -iwlan0 -c/etc/wpa_supplicant/wpa_supplicant.conf -Dwext -dd -s

-dd = extra debugging output
-s = send the output to syslog (/var/log/syslog)
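Once the supplicant is up you can also poll its state rather than tailing syslog (wpa_cli ships with the wpasupplicant package):

sudo wpa_cli -i wlan0 status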

Final WPA-Enterprise WPA Supplicant configuration

cat /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
ctrl_interface_group=root
update_config=1
ap_scan=1

network={
 ssid="<ssid>"
 scan_ssid=1
 key_mgmt=WPA-EAP
 pairwise=CCMP TKIP
 group=CCMP TKIP
 eap=PEAP
 identity="<uid>"
 password="<password>"
 phase1="peapver=1"
 phase2="MSCHAPV2"
 pac_file="/etc/wpa_supplicant/pac"
}

Hint: If you have both ETH0 and WLAN0 enabled and cabled, then ETH0 will connect first, so the routing table will reflect its default gateway.  If each interface is on a different subnet, you will have connectivity issues if you unplug ETH0 - the default route goes away.
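You can check which default route is currently winning with either of these:

route -n
ip route show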

Raspberry PI Wireless WLAN0 won't work without ETH0

There are plenty of other sites documenting how to initially configure wireless, so I'm not going to repeat them.  This is specifically about a weird issue I had configuring the card at home.

No hardware issues; the latest Raspbian build found the Edimax USB card with no problems.  My problem was that the wlan0 interface would not work without eth0 also being active.

Further investigation led me to find that the MAC address both the wlan0 & eth0 IPs responded from was in fact the eth0 address.

Now this goes against everything I understand, but in my case, with the RPI ethernet cable plugged in:

Take note of the HWaddr on each card.


$ ifconfig
eth0      Link encap:Ethernet  HWaddr b8:27:eb:b0:0c:39  
          inet addr:192.168.99.75  Bcast:192.168.99.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1


wlan0     Link encap:Ethernet  HWaddr 80:1f:02:82:33:24  
          inet addr:192.168.99.78  Bcast:192.168.99.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1


Then from another workstation, in this case I'm using NMAP:

$ sudo nmap -sn 192.168.99.75   << - ETH0

Starting Nmap 6.25 ( http://nmap.org ) at 2013-02-03 10:19 GMT
Nmap scan report for 192.168.99.75
Host is up (0.020s latency).
MAC Address: B8:27:EB:B0:0C:39 (Raspberry Pi Foundation)
Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

$ sudo nmap -sn 192.168.99.78   << - WLAN0

Starting Nmap 6.25 ( http://nmap.org ) at 2013-02-03 10:19 GMT
Nmap scan report for 192.168.99.78
Host is up (0.0044s latency).
MAC Address: B8:27:EB:B0:0C:39 (Raspberry Pi Foundation)
Nmap done: 1 IP address (1 host up) scanned in 0.07 seconds


You can see that the MAC address/HWaddr for both ETH0 and WLAN0 is the same, and matches the ETH0 HWaddr from ifconfig.  So in my case the wireless was not working and all traffic was passing via ETH0.

I never actually found 'the reason' for this.  In the process of debugging it started working reliably.  Which I hate.  But this config is now working :

/etc/network/interfaces

auto lo

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

/etc/wpa_supplicant/wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
ctrl_interface_group=root
update_config=1
ap_scan=1

network={
 ssid="<ssid>"
 psk=<key>
}
network={
 ssid="<ssid>"
 scan_ssid=1
 key_mgmt=WPA-EAP
 pairwise=CCMP TKIP
 group=CCMP TKIP
 eap=PEAP
 identity="<uid>"
 password="<password>"
 #ca_cert="/etc/cert/ca.pem"
 phase1="peapver=1"
 #phase1="fast_provisioning=1"
 phase2="MSCHAPV2"
 pac_file="/etc/wpa_supplicant/pac"
}
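After editing either file I just bounce the interface to pick up the changes (or reboot); a quick sketch assuming the standard Debian ifupdown tools used above:

sudo ifdown wlan0
sudo ifup wlan0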

Raspberry PI, Video & Audio Streaming

Requirements :
Using my PI seemed a logical solution = small.  Attach a PowerGen external battery pack = portable.  I'm already making progress!

Streaming has been tougher - 4-5 days of tough.  I settled almost immediately on VLC and started by using the Windows GUI to build out the command-line options.  I ended up with a network stream that the VLC client could open.
vlc --qt-start-minimized dshow:// :dshow-vdev="QuickCam Orbit/Sphere AF" :dshow-adev="Microphone (2- Orbit/Sphere AF)" :dshow-size="640x480" :live-caching="300" --sout="#transcode{vcodec=h264,vb=0,scale=0,acodec=mp4a,ab=128,channels=2,samplerate=44100}:http{mux=ffmpeg{mux=flv},dst=:8888/}"
Figured the next step was to reproduce this on Ubuntu before trying Wheezy, in case there was anything odd about the distro.  The Ubuntu build resulted in:
cvlc "v4l2://" --v4l2-width=420 --v4l2-height=320 --input-slave=alsa:// --sout '#transcode{vcodec=WMV2,vb=256,acodec=wma2}:standard{access=http,dst=8080/,mux=asf}'
Adding the Medibuntu repository allowed me to use H.264, but the video was awful.

Getting this working involved a great deal of trial and error to find video and audio codecs that worked.

RaspberryPI

Expected to be able to use the same Ubuntu command.  #fail

Again, a lot of trial and error resulted in:
cvlc 'v4l2:///dev/video0:chroma=mjpg' --v4l2-width=420 --v4l2-height=320 --input-slave=alsa://hw:1,0 --alsa-audio-channels=1 --volume=1024 --sout '#standard{access=http,mux=ogg,dst=:8888/stream.ogv}' --sout-http-mime=video/ogg
The big difference here is that I stopped transcoding and switched to using the M-JPEG stream straight from the camera, as the CPU just wasn't up to transcoding (>90% CPU).
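On the client side the resulting stream can be opened with VLC, or anything else that plays Ogg over HTTP (the Pi's address here is a placeholder):

vlc http://<pi_ip>:8888/stream.ogv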

Initially the audio stream volume was very low.  --volume=1024 helped, but the actual fix was to use alsamixer.

alsamixer
- Change the sound card to the camera (F6)
- Press F4 to switch to the capture controls
- The capture level was set to 0; I raised it to 50
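To sanity-check the capture level you can record a short clip straight from the webcam's mic (assuming the camera is ALSA card 1, as per the hw:1,0 device in the cvlc command above):

# record 5 seconds from the webcam mic, then play it back
arecord -D plughw:1,0 -f S16_LE -d 5 /tmp/test.wav
aplay /tmp/test.wav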

Getting this working in a browser is still a WIP.  The camera outputs:

Video : MJPG
Audio : PCM S16 LE (ARAW)

From what I can tell there's no way to view an MJPG stream in a browser.  I tried various things, including wrapping it all in an HTML5 <video></video> tag with some Apache proxy statements, and setting the MIME type (which I left on the working command above).

I have some H.264 cameras coming, so I'll see how that works out.

Some useful links that got me this far :
http://www.phillips321.co.uk/2012/11/05/raspberrypi-webcam-mjpg-stream-cctv/
https://www.adafruit.com/blog/2012/11/02/pieye-streaming-webcam-piday-raspberrypi-raspberry_pi/
https://forum.videolan.org/viewtopic.php?f=13&t=74077&start=0
http://stackoverflow.com/questions/7917905/how-to-use-vlc-live-streams-with-html5-video

Sunday, 20 January 2013

HMA! Mountain Lion DNS problem

Having recently taken more of an interest in my online privacy I started to use Hide My Ass!

On Windows the HMA! client works well, but on OSX - certainly Mountain Lion 10.8.2 - it would launch and connect but nothing would work.

Traced it to DNS.  I pass the Google DNS servers (8.8.8.8 & 8.8.4.4) to my clients via DHCP.  The Mac had these, and dig & nslookup both resolved fine, but nothing else did - browsers, ping etc. all failed.

The HMA! wiki suggested setting the DNS servers directly in the OS.  Which I did, and bingo - everything works.  So to recap, that's setting the same DNS servers statically rather than via DHCP.

A Raspberry Pi & HMA! post is to come.  That's not so easy!

Thursday, 27 December 2012

Raspberry Pi & Samba 4

An early Christmas present to myself in the form of a Raspberry PI, no specific plans for it but just wanted one.

I forgot to order an SD card with the OS installed, but the wiki instructions and a 16GB card I used in my DSLR worked fine - up and running in less time than it took to download the .img file.

The OS is Debian based and the repository is pretty up to date.  I'd recently read that Samba 4 was now a viable full MS AD replacement, and I fancied a crack at replacing the core services on my home network, currently fulfilled by my NAS.  Rumblings of a web authentication project at work, for which MS AD is being touted as the directory of choice (yes, really), also gave me a good reason.

The install went fine; the rest, not so much.  Samba 4 requires DNS & Kerberos as partners in crime.  The wiki was source-code centric and alluded to an internal DNS service.  The repo build included BIND 9 - which took me ages to work out - but not Kerberos, which required a separate install.  Needless to say they didn't want to play nicely.

All in all, after a few days of playing I threw in the towel.  I've no doubt that with some perseverance and cursing I could have got it working, but I figured I don't really want something that complex running my house core.  One upgrade and pop, back to square one.

But I do like the PI

Monday, 10 December 2012

Cleaning up BackTrack/Ubuntu NIC's

I was copying BackTrack VirtualBox images around and the network wouldn't start.  I'm recording the commands here as it's not something I do every day and I will have the same issue again.  The same process would work for Ubuntu.

root@bt:~# ifconfig eth0 up
eth0: ERROR while getting interface flags: No such device

root@bt:~# lspci
Will list all the hardware devices.  Confirm the Ethernet controller is loaded.

root@bt:~# ifconfig -a
Will list all the interfaces known to the system, regardless of whether they are 'UP'.  In my case the card was ETH4.

root@bt:~# ifconfig eth4 up
Worked.

I wanted to clean this up, as ETH0-3 are all a result of switching machines and MAC addresses.

nano /etc/udev/rules.d/70-persistent-net.rules

Remove the redundant entries and rename your NIC back to ETH0.
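For reference, each entry in that file looks roughly like this (the MAC address shown is hypothetical); delete the stale lines and change NAME on the one you keep back to "eth0":

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:cc", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth4"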

Saturday, 27 October 2012

Did they just reveal Judi Dench's 'M' character's real name?

Casino Royale
JB: "I thought 'M' was a randomly assigned letter. I had no idea it stood for-"
M: "Utter one more syllable and I'll have you killed"
Skyfall

James Bond introduces 'M' to Kincade the gamekeeper.  He hears "Em" and deduces "Emma".

Coincidence ?

Saturday, 20 October 2012

iTunes and I really don't get on - 100% CPU

If I wasn't attached at the hip to iOS devices I would gladly put every known iTunes developer to the sword.  But I am, and Apple don't seem to be listening to my calls to buy MediaMonkey.

So I have to put up and shut up (well, not so much shut up).

My Windows 7 install of iTunes has been misbehaving: very slow to start, and between iTunes and the AppleMobileDevice service they were stealing 100% of the CPU.  Killing the AMD service made the machine usable, just.

Eventually I looked into the issue and, lo and behold, plenty of reported problems.  And all the solutions were to run 'netsh winsock reset' << Wait, what?!  I seriously have to reset the settings back to default?

Apparently different apps clash, Apple's Bonjour service being particularly susceptible.  Grabbing Autoruns and clicking the Winsock tab only showed Bonjour and HMA VPN (highly recommended) entries.  This machine is close to a rebuild when Windows 8 is on general release, so what the hell.

Ran the command, rebooted and bingo: iTunes restored, HMA Winsock entries gone, but HMA still working (?)

Did some more reading and it seems any type of networking problem leads to 'netsh winsock reset'.  Not sure how, after, cough, many years of being in tech, I've never come across this before.

netsh winsock reset = the Swiss Army knife of Windows.

Monday, 17 September 2012

My CISSP Journey.

On Sunday 16th September 2012 I sat, and passed the CISSP exam.

I won't say this was an entirely pleasant experience, and it's certainly not something I plan on repeating any time soon.  For those that maybe don't know, the CISSP is a gold-standard professional qualification for security in the business, made up of 10 domains, each a 'common body of knowledge'.  "An inch deep and a mile wide" is the phrase often associated with this qualification.  Not sure I agree; at times it felt like a mile deep and I was standing at the bottom looking at a tiny shard of light.

My journey began in March 2012 when I decided to pursue the qualification and so ordered what some people review as the holy grail of resources: Shon Harris's All-in-One CISSP Exam Guide (5th edition), commonly known as the AIO.  This is a behemoth of a book!  The instant it arrived I sought out a PDF, which was hooked up to GoodReader (highly recommended app) on my iPad.

Around mid-April I had finished chapter 3, Security Architecture & Design.  A seriously PAINFUL read; I was ready to quit on more than one occasion.

By this time I had stumbled across plenty of decent online resources: cccure.org, with its excellent test engine (pay up, people) and forums, and a blog by Richard Rieben.  I emailed Richard to say thanks for taking the time to post, and in return he kindly volunteered as a resource if I had any questions.  Which of course I had loads of - I had inadvertently gained a mentor.  And what a valued resource that was!

Richard recommended some other, lighter reading: the CISSP Prep Guide: Mastering the 10 Domains.  A pretty old book but bang on the money as a resource.

I had set myself a target of a summer exam and was also considering taking the (ISC)² seminar alongside self-learning.  I (somehow) managed to get my employer to commit to paying for the boot camp at Firebrand.  Listening to the reps, you could just turn up with zero knowledge and, 7 days later, walk away with a CISSP.  Yeah, maybe, but not me.  I wanted that course to be the icing on the cake and so continued with the self-learning.

Skip forward 6 months: a number of freak-outs, LOTS of reading (I could easily have put in over 1000 hours!), NIST docs, watching videos, making copious notes, 2000+ questions taken on cccure - and the day of the class had arrived/snuck up on me rather sharpish.

When the chit-chat started it was quickly apparent that I was the guy who had done the most prep; most of the others owned a book but had given up on it.  I was shocked, but not as much as they were when the class started at 100mph.

I was booked on the Boot Camp: 6 days with the exam on day 7, an all-encompassing learning experience which started on the Sunday we arrived.  Dennis Griffin from (ISC)² was our tutor and I can still hear his US Southern drawl - I heard it all through the exam, like a mini Dennis sitting on my shoulder.

When I got the (ISC)² seminar book I quickly realised that every piece of CISSP material out there, most of which I had consumed, including the Official (ISC)² Guide (OIG), had loads more info than the class, to a much greater depth, and that much of that information was out of date.

We stormed through the domains at an average of two a day; nothing we tackled was new to me and I started to feel comfortable, maybe too comfortable.  As the test day approached I started to wonder what would happen if I failed.  Where would I go?  How could I improve?  I knew that content inside out.  Shortly after that realisation I started to freak.  Thursday PM I left the class early and just went for a long walk.  I think the whole week was starting to get to me; 14-hour days and reading before and after class were exhausting.

I followed the instructor's, and Richard's, advice and packed up learning on Saturday afternoon.  Went for a walk, had dinner at the local golf club, then went back to the room and watched a couple of movies.  I slept surprisingly well considering every other night had been restless.

Sunday, test day.

The class had agreed a 09:00 start.  We were all ready by 08:00 and the guy said we could start if we wanted.  We all did.

The next 5 1/2 hours are a bit of a blur.  We'd been told the average time for the CBT test was 3 hours, and the fastest 56 minutes!  I already had my test plan: 50 questions, break, pee, drink, start again.  Any question I wasn't 100% on I would flag, note down and come back to.  The first time I looked at the clock, 50 questions had taken me over an hour and my review sheet was growing at a horribly fast rate - a 6-question flag run somewhere in the 80s had me on the back foot.

I hit question 250 at about 4:45.  Had a break and started to review the 50-60 I had marked.  Some immediately jumped out, the majority didn't.  By 5:30 I was clock-watching and knew that I was mentally breaking, so I just started doing the remaining questions as if it were a practice test.  Maybe that helped with the stress and I hit some correct choices; I actually didn't care by then.

I ended the test, went through the multiple 'are you sure?' boxes and walked out.  I felt shit; I didn't think I had a pass, but also not a fail.  I was broken.  Nothing came out of the printer. "Did you press exit?" - bugger.  The proctor went and ended the test, the printer started, paper came out.

Congratulations! I made her read it twice to be sure and then did a little dance, really, I did.

My thoughts

If I were to advise someone who was considering this journey, I would strongly suggest they get hold of a seminar course book - beg, borrow, buy, steal.  Get that book.  Read it and use it as the template to branch out using other resources, but stay within the confines of the book.  If you can, take the seminar; if you go for a boot camp/fire hose then do the prep.  Don't turn up and hope to be drip-fed.

The exam

The CBT actually worked OK, and I think I preferred it to a paper test.  Marking questions for review, navigation etc. all work well and they even provide an on-screen calculator if you need it.  Once you get to the end you are prompted with your marked questions; you can review them individually or in order, unmarking as you go.

Halfway through the exam I realised that even if it was an open-book exam it would probably only gain you 20-30 questions, max!  The questions are clever and conceptual, wanting you to understand the question and apply knowledge and judgment.  Some questions had the obligatory 4 right answers, but none of them were tricks - no double negatives.  Just a fair few "WTF does that mean?!" ones.  In general I didn't have a problem with the way they worked but did find the majority of the 'scenario' questions rather pointless.  I of course can't give any details, but I came up with this analogy.

"You have been hired as a chocolatier for XYZ.  They make dark, milk, white, buttons, bars and eggs.  They have recently acquired a new company who make biscuits, they are moving to a bigger premises and will be recruiting 100 more people in the next 6 months.
The CEO has declared they need to :
  • Increase domestic growth 
  • Move into new markets."
 What is the main ingredient in chocolate?
  1. cocoa
  2. milk
  3. sugar
  4. paper
The actual question stands alone; the scenario mostly just kills your time.

What to learn

The seminar book and everything else has lots of content.  Know it, but - and this is what everyone says - understand it!  (See my comments about open book.)

The material I used:
My thanks to Richard and my long suffering girlfriend.  (in that order, she never reads my blog)

Wednesday, 8 August 2012

iTunes/AppStore default apps assigned to another user

The iLife apps which now come with Lion+ are by default 'assigned' to the first iTunes user that logs in to the Mac and updates them.

Which is fine if it's your personal device, but not fine when it's a business one and that initial user leaves the company.  Apple provide no method to re-assign apps, and the no-media mandate means you can't re-install.  Essentially you're broken and can't update any of the apps.

Spent a good deal of time on Google and eventually emailed support - who were crap.

Resorted to calling AppleCare; the first tech tried to send me back to iTunes support, but with some perseverance I got through to a senior tech (who wishes to remain anonymous).  He took the details and totally owned the case.

He came back within the hour and said we could switch the registration from user XYZ to ABC.  We registered ABC, which will now become the business iTunes account, and had the apps switched over.

At first this failed, but a suggestion to delete the apps and then re-install resolved that.

Happy user, and really happy with the way Apple support dealt with this.

Monday, 9 July 2012

Chrome 20 - VERY laggy

After an Ubuntu 11.10 update my Chrome became utterly unusable - I mean really laggy when typing and scrolling.


Found a forum post suggesting the problem is a recent Flash update:


http://askubuntu.com/questions/161228/chromium-input-fields-lag/161539#161539

  1. Install the adobe-flashplugin package
  2. In Chrome, go to: chrome://plugins/
  3. Disable the /opt/google/chrome/PepperFlash/libpepflashplayer.so instance of Flash Player
  4. Restart Chrome

Problem fixed, at least for now.

Monday, 11 June 2012

Change the Grub Boot Order Ubuntu


GRUB can be configured using the /etc/default/grub file. Before you make any changes to it, it may be a good idea to back it up by creating a copy:

sudo cp /etc/default/grub /etc/default/grub.bak

You can restore it later by copying the backup over the original:

sudo cp /etc/default/grub.bak /etc/default/grub

Open the file in a text editor with root privileges:

gksu gedit /etc/default/grub

The line GRUB_DEFAULT=0 means that GRUB will select the first menu item to boot. Change this to GRUB_DEFAULT=saved, which makes it easier to change the default item later.

Save and close the file, then run this command to apply your changes to GRUB's configuration:

sudo update-grub

The change allows the grub-set-default and grub-reboot commands to be used at any time. These let you change the default boot item permanently or only for the next boot, respectively.

Run grub-set-default or grub-reboot (with sudo) with the number of the menu item to boot (the first item is 0). For example, this command changes the default to the second item:

sudo grub-set-default 1

If you want to select an item from a submenu such as "Previous Linux Versions", specify its position in the main menu, followed by a greater-than sign (>), followed by its position in the submenu. You can also name an entry instead of giving its position. The Ubuntu Wiki has more details on configuring GRUB.
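For a one-off boot into a different entry, grub-reboot changes the selection for the next boot only (the entry number here is just an example):

sudo grub-reboot 2
sudo reboot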

Friday, 18 May 2012

VMware, a changing beast .. VMUG 17/5/2012

Attended the London VMUG yesterday, which turned out to have a particularly interesting agenda.

UGs have previously been very infrastructure specific: servers, hardware, software releases, storage, heated network debates and third-party vendors touting their wares.

This one had a slight deviation into some of the new VMware initiatives.  First, a short but nice presentation on vFabric and vApp - both things I'm interested in due to my employer recently jumping on the VMware/Spring stack: tcServer, Gemfire, and even big brother EMC rolled in with a couple of Greenplum racks.

The second break from tradition was a slot given over to Neil Mills (@MillsHill_Neil) of Mills Hill Recruiting.

The pre-amble by Simon Gallagher (@vinf_net) very much set the tone on how the traditional VMware geek needs to, or should, start to become far more holistic in their approach to technology and solutions.

Neil followed this up with his presentation setting the vision for the 'Cloud Angel' and the next 5 years.

Cloud Angel: Someone who's no longer constrained by the silos of technology, someone who's adept with the cloud harp and can transfix the business with their melodic musings of an automated data centre with apps and services flowing from the fingers of devs out to consumers with no more than a man and a dog.

Being a VMUG it was a VMware-centric presentation: VCP will be/already is expected; push your VMware and storage, VMware and security, VMware and <insert technology> - you get the idea.  On the journey home this poked a few brain cells.

Together the two presentations got me wondering about the future and how much of an angel's toolbox will contain the VMware we know today; 5 years is a long time.  Can they continue to be this dominant?  I think something that was missed, maybe for controversial reasons, is to consider some of the alternatives.

VMware's acquisitions are taking them up into the app layers while the new kids on the block are starting to get rowdy: OpenStack, Piston, Eucalyptus, AWS etc.  The trend is also towards infrastructure as a commodity - vBlock, Flexpod, wheel it in and wheel it out - bursting into public clouds, moving into public clouds.

It's the new things that will keep them in this market - I guess that's an obvious statement!  But will it also become the direction at the expense of the more traditional?  vApp is a great example that just happens to run on vSphere, but does it have to?!  I wonder if there's a secret lab where virtual pixies are plugging the next generation of solutions into other APIs, and one day we might even see vSphere being given away.

VMware were undoubtedly a game changer in this industry, but we're starting to see the next evolution.  It will be interesting to see how they, and the Angels, adapt.

I hope they do, I already buried Netware and that was traumatic enough.

Wednesday, 16 May 2012

I can't click on the flash plugin 'Allow/Deny'

Ubuntu - Chrome & Firefox both had this.  An IM app tried to invoke a Flash URL to launch a video-share service.  The Flash 'allow/deny access to your camera' dialogue box would pop up, but nothing was active - no clicky mouse.


Go to :

http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager09.html

and it displays the Flash global settings for your workstation.  Set the site you want to allow to 'don't prompt' and all is well.

Tuesday, 1 May 2012

Backtrack Chromium

Chromium can be installed from the repos, but by default it won't run because BackTrack runs everything as root and Chromium refuses to start as root.

Follow the steps below :

1. apt-get install chromium-browser
You can also use Synaptic and select the chromium-browser package.

2. cd /usr/lib/chromium-browser

3. Replace geteuid with getppid using hexedit, with the following command:

hexedit chromium-browser

Then press Tab to switch to the ASCII column, press Ctrl+S and search for geteuid, overtype it with getppid, then press Ctrl+X to save and exit.

4. Finished

EDIT 9/11/2012

This same process works for Google Chrome.  The file to edit is /opt/google/chrome/chrome.
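As an aside: because geteuid and getppid are the same length, the same byte swap should in principle work non-interactively too (a sketch I'd treat with care - hexedit is the method I actually used; -i.bak keeps a backup of the binary):

cd /usr/lib/chromium-browser
sudo sed -i.bak 's/geteuid/getppid/g' chromium-browser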