Friday, 1 May 2020

My photography workflow 2020

Setting up a "workflow", or an organised approach to your work, is a key part of moving from beginner to intermediate skill and beyond. So here's a bit about my photography workflow, since I am slowly retiring my old Mac Mini running OS X Snow Leopard and moving to an old Windows 10 desktop. Just before COVID-19 turned the world upside down I had also got a new camera not supported by any of the RAW converters I was using.

Shooting


For my veteran Canon 450D I have a couple of styles set up from Cinescopophilia, shooting RAW + JPEG. In the past I have sometimes shot RAW only, but it was a pain when software didn't support it. Now even my phone can process the CR2 RAW files!




On my X100 and X-T100 mirrorless cameras I have started to try and follow the tips from In Camera: Perfect Pictures Straight Out of the Camera by Gordon Laing, with general settings from Fredrik Averpil and film simulation settings from Fuji Weekly. The X100 is also a veteran now, but being a premium camera when launched it has 3 custom settings, compared to the newer entry-level X-T100 that only has one :-(

The red thumb grips add a bit of bling and make the smaller mirrorless bodies a bit easier to use. As does the screw in soft release button on the X100. I also use my spare on my Grandad's old Olympus Trip 35 film camera.

Editing and storing

The biggest journey I have been on in photography is my technique around editing. I used to like DxO Optics and the very finely tuned body + lens corrections, but as a hobbyist it's a bit of overkill, the equivalent of a startup adopting the process a multinational needs. 

I have 5 key bits of software I use:

  • RAW conversion - to "develop" and polish photos.
  • Library - to store, search, and organise.
  • Film effects - to give a vintage look; if I'm honest, usually to make up for errors in shooting settings ;-)
  • HDR - for landscapes or interiors with lighting too tricky for a single frame to capture.
  • Panorama stitcher - for when I don't have a wide enough lens to do a scene justice, or to provide a much higher resolution image than I could with a single frame.


From:

  • RAW conversion: Digital Photo Professional (DPP), DxO Optics 6, Aperture
  • Library: Aperture
  • Film effects: DxO FilmLab 3, CameraBag
  • HDR: HDRist
  • Panorama: Panorama Maker 4




To:

  • Raw conversion: DPP 4, CaptureOne Fujifilm express, CameraBag Pro, Polarr Pro (also on Chromebook and iPhone)
  • Library: CaptureOne Fujifilm express
  • Film effects: CameraBag Pro
  • HDR: Skylum Aurora 2018
  • Panorama: Panorama Maker 6


The main switch has been giving up Aperture. It's a shame that Apple retired it; I plan on going back to Apple hardware once finances allow and trying out Gentleman Coders' RAW Power as a successor to Aperture. I have been tempted by Lightroom at several points, but the relationship that CaptureOne has with Fuji, plus some good reviews comparing them, means I'd rather pay for CaptureOne Fujifilm. One minor faff at the moment is using either DPP or CameraBag Pro to convert my Canon RAW files to TIFF, so that I can edit them in the Fuji-specific version of CaptureOne. I don't shoot that much on the Canon anymore, but I find as my skills improve I'm editing RAW files over a decade old!

Polarr Pro and CameraBag Pro are both good value Lightroom alternatives for the kind of photo editing I do. They also have free trials, so it's worth taking a look.

One addition to my workflow is syncing my output folder to Amazon Photos, plus manually backing up originals periodically. I used to copy all my iPhone photos into the same local file system on my Mac Mini, but now I also sync that automatically. Luckily the GoPro and Canon software can automatically pull photos off the memory card. They then go into source folders that Amazon backup is watching. The technology pretty much just works, and it is one less thing to think about.

Before I realised that Amazon Photos stored uncompressed hi-res files by default, I was planning on using Google Photos to go alongside my Chromebook. I've used Picasa since before Google took it over and folded it into Google+, then linked it with Google Drive, and then its current incarnation. It's slowly become less useful, and if it wasn't a key consumer feature to have, I expect it would already be in the Google graveyard.

Creating order and a system for my photography has been oddly therapeutic. It's also a useful skill to practise!

Sharing

This is one part that I am still figuring out. I used to use Flickr as my main way of sharing images, with albums on Facebook. Now I seem to be in a cycle of sharing an image (or a small set) on Instagram. It feels like the community on Instagram is much more like the Flickr community of 2008. I also have a small portfolio on GuruShots. So far, from their competitions, I have photos that are going to be in exhibitions at galleries in Berlin, Stockholm, and Melbourne. For a hobbyist like me, it helps give inspiration and focus on what to do next.

Comparisons of Lightroom and CaptureOne

In case any Lightroom users are interested in jumping ship: Lightroom seems to be the "average" RAW converter these days - not the worst, but not the best either.

Wednesday, 29 April 2020

Using RStudio with a Chromebook

Given my interest in using R as an analysis tool, and having used a Chromebook as my main laptop at home and on side projects for 2 years, I thought I'd take a quick look at some options for combining the two.


Option 1 - Using RStudio on AWS


This has been my main way of using R at home (I use RStudio on Windows 10 at work). With a couple of preset images available it's easy to get up and running. I used the image created by Louis Aslett. There are a few alternatives around now; even Amazon has an article that goes into a bit more depth on Running R on AWS, which allows you to fine-tune what is set up. I have been using this for about a year now and the experience is seamless on a Chromebook, since everything else is also running as a Chrome app.

Pros: 

  • Use computing power in the cloud, can easily share
  • Consistent experience no matter the computer you use to access it

Cons: 

  • AWS management consoles do have a learning curve
  • Small monthly charge for the storage option in the image

Option 2 - On a Chromebook directly using Linux beta

This is one of the newer options. I used Mark Sellors' guide to getting started, although I downloaded the Debian package directly from the RStudio website and needed to run


sudo apt install libnss3

as there is now a new dependency. One of the surprising things for me about this process was that I remembered how to use Vi well enough that I didn't read the instructions until after I had already edited the file.

Pros:

  • Offline access to an R environment, on the go

Cons:

  • Most Chromebooks are under-powered, so not useful for any heavy data crunching!
  • A bit of effort/familiarity with the Linux command line is required

Option 3 - In the cloud!


One of the more exciting options is the new https://rstudio.cloud/. As the name suggests, this is a hosted version of RStudio in the cloud - similar to the AWS option but with even less to worry about. I created an account using my GitHub credentials, imported a project from GitHub, and was up and running. There is no setup that isn't directly related to the code you want to run, and nice tool-tips prompt you to install required packages that aren't already in your workspace. I'm not surprised how well this works, given RStudio has had a server version for years; that probably gave them a head start over porting a pure desktop app.


Pros:
  • Really easy to use
  • Hosting all taken care of
  • It's still in beta - so no cost yet

Cons:
  • It's still in beta - so potential disruption to service

Option 4 - RStudio Desktop/Server at home


Speaking of RStudio's server version ... I have recently rebuilt an old PC for my photography hobby editing. This includes an NVidia graphics card that could also be used for some GPU acceleration, for example TensorFlow for R, if I ever get around to using it :-) Looking at the three other options, now that I have a more powerful setup this will be the most straightforward option! I'm keeping an eye on RStudio.cloud though, as it does save the overhead of version management.
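As a taster of what that graphics card could do, here's a minimal sketch using the keras package for R. This is a hypothetical example rather than anything from my actual setup - it assumes keras and a TensorFlow backend are already installed:

```r
# Sketch only: assumes install.packages("keras") and keras::install_keras()
# have been run so a TensorFlow backend is available.
library(keras)

# A tiny regression model; TensorFlow will use the GPU automatically
# if it detects a supported NVidia card.
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1)

model %>% compile(optimizer = "adam", loss = "mse")
summary(model)
```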

Some of my adventures to date

Wednesday, 11 September 2019

MEETUP: Discovery is a mindset not a stage - creating a learning organisation at ProductTank Brighton

Last night was the latest instalment of ProductTank in Brighton. We were lucky to have an overview of continuous discovery from Teresa Torres.

She started off by discussing how discovery and delivery should be intertwined and not separate phases - this can be a problem with "dual track", as separation is the impression it can give. She then asked "Continuous discovery ... are you really doing it like this?" Or is it research with a project mindset?

Getting better week after week



The question to guide setting up product discovery is "How do we get better week over week?"


We make product decisions every day, so if we want to be truly customer-centric then we need to engage with our customers every week, so that our decisions are infused with their input. Because we are thinking about product all day every day, we need to avoid cognitive bias by checking our decisions against what our customers expect.

We are shooting here for co-creating with a product mindset. Otherwise you can drift into a project mindset by developing in a vacuum, then asking customers to "validate" shortly before delivery into engineering. The earlier you get feedback, the easier it is to change and the more likely you are to do it ... and not rationalise away what you need to change.

This is not to say customers dictate solutions. We are the experts in those. They are the experts in their context and problems. This is where we co-create: two sets of expertise becoming more than the sum of their parts.

She then gave an intro to the "opportunity solution tree" - for more details I'd highly recommend her post that goes into how to build one up.

The key is to create one opportunity tree targeting one quantifiable metric. The goal is the business outcome we must achieve. Then map out the customer needs that we think will drive that outcome, the opportunities that will meet those needs, and the solutions that deliver on them. The goal should be a two-way negotiation between the product leader and the team - the leader setting the business context, and the team communicating how soon it is feasible.

Her final tip was, when interviewing, to get them to tell stories - "tell me about a time when" - as this is a better method of collecting insight than how/where/when questions, which can be leading. She also recommended starting off every customer contact with this kind of question before getting feedback on a specific slice of functionality, to make sure that you gain generative feedback without leading with your assumptions.

Further reading


Sunday, 7 October 2018

Adventures with flow and transparency

Photo by Sasha • Stories on Unsplash
This is a follow-up to my post on roadmaps and themes. I wanted to talk a bit more about my experience in a B2B context with a platform product and a SaaS-style model, as most articles out there tend to be about B2C or app products.

So about the time I wrote about theme based roadmaps, I was using a combination of spreadsheets, Trello, and JIRA ... all OK for their intended purpose, but all have limitations around use and structure for product people. 

These limitations can be blockers to increasing flow and transparency in the product development process. So, why are flow and transparency important? I think this tweet by John Cutler sums it up



The flow aspect allows feedback and course alteration as new info is uncovered. The transparency aspect allows those who can provide input to be part of the environment you are creating, to make better decisions.


So what do I mean by "flow"?

To me it means having a regular and consistent progression of work through to delivery, not working towards an arbitrary release date. You have a light-enough-touch process to deploy through to production without much overhead. This implies having skill and discipline across the team in chunking issues down to a minimal size, to flow through the system. For me, this was a natural transition from larger platform pieces tied to a theme to smaller experiments unlocking the capability they made available.

It also means that you need a good understanding of what you are aiming for, to guide the smaller pieces of work and not get lost. To do this I needed a good picture of our options and feedback. Using a combination of spreadsheets, Trello, and JIRA was making this difficult.

Building foundations

The first step was to get ideas in one place, with customer feedback logged in the same place, linked to the ideas, building up a picture of the need - with the ability to share with the team!

This became the place to start fleshing out ideas with the high-level problem and the value of a solution if delivered, making it much easier to connect the dots. There was also an added benefit - the designs, user stories, and specs can all be held in one place prior to pushing to a development management tool, e.g. JIRA, and the results of spikes can form part of the idea.

Also, now that the data is more explicitly structured, I can report on it - and daily/weekly emails can help communicate team activity.

What happened next

Once all the activity was in one place - organised to try and explain what was going on, and available to query on demand - I learned three lessons:

First, people naturally want to help, but they won't be aiming for your goals or fully aware of your job role. I.e. they won't be doing your work for you ... so, you need to understand what drives them and where common work can help. Then use that for mutual benefit. One barrier is using a different system, so auto updates between systems, or simple logging of interactions, can ease collaboration. 

Secondly, one of flow's benefits is the lack of management overhead needed for coordination, but unexpected events can throw it off. E.g. a backlog in the process can have a knock-on effect that over time requires greater coordination to get momentum back. So, keep an eye on your delivery metrics - not as targets, as they'll then become vanity metrics, but to track the health of the system. Dashboards that show trends are great for this.

Finally, transparency is important for trust, but ensure that your audience understands what you are communicating and why. Be careful of inadvertently over-promising. One misunderstanding that can arise here is the difference between a roadmap and a release plan.

Thursday, 23 August 2018

Returning to code, worth it for a Product Manager?

The past couple of weeks have given me opportunities to reflect on what I like about my job and previous experience. Partly because we are expanding the team at 15below, partly from doing a bit of coding. I have written a bit about becoming "post-technical" in the past, but now is the first time I have done much code in years.

The thing that I enjoy most is solving problems and helping people. Throughout my career solving business problems to help create positive outcomes has always been fulfilling. Now I get to help do that, then go back and refine the solutions. You don't always get to do that as a developer or in a project focused role.

Side project

Code wot I wrote
The first bit of coding is on my side project. Martyn has created a great architecture and I contributed the project import from LinkedIn (almost) all by myself. It feels brilliant to code on a side project - you get a sense of achievement from seeing an idea come to life. It also provides a way to express ideas and a method of collaboration. It's sometimes easier to do some rough code and then hand over to a proper developer to complete. It also helps that Elixir is a nice language to use, and reminds me of Prolog.


Work

The second bit of coding recently has been creating a report for work, with a slightly different motivation. I needed some information to share, and the developer resource to surface it was better used elsewhere. I had just enough skill to self-serve the report creation, and I was still exploring what was interesting and tweaking how the data was tagged. So, it was much simpler for me to prototype what I needed. Once this has stabilised I expect to hand it over to be integrated fully into our MIS reporting.

It has been fun using the RStudio environment, again with a nod to previous experience this time Matlab, although the language is different. I have an environment in an AWS instance that I can access via the browser on my Chromebook, and an install on my Windows laptop.


Should Product Managers code?

At the same time all this was going on, a blog post answering the question "Should product managers know how to code?" popped up. I must admit, my heart did sink a little at what it might contain. However, I was pleasantly surprised. In particular I liked the tip about learning some CSS and exploring 'Inspect Element' to tweak pages and see how changes will look. Learning what happens where in your application is also a good idea; the best people I know in non-technical software roles understand this.

So, although it's fun, I don't think Product Managers need to code. It can be useful for understanding, or for self-serving quick prototypes of ideas.

Sunday, 5 August 2018

Further adventures in SEO land: Organic search for side projects and startups

This is part of a series about my side project Bashfully, which aims to give graduates and other new entrants to careers a seasoned-professional-level way of expressing themselves through the super power of storytelling, following the core principles of being discoverable, personalised, and guiding in approach.

I have already written about Venturing into the world of SEO, where I set up the infrastructure on the site, like metadata and adding a sitemap. Then a little later, When SEO meets the MVP process on Bashfully, where I revisited how search results appear and what I hoped for Bashfully.


Photo by Annie Spratt on Unsplash
This has been the hardest part of the project so far. As much as you can set up the structure and hint to search engines, if your content doesn't match what people are actually searching for, then they're unlikely to find your site. So, how to get organic searches?


Taking a look in the search consoles, the closest searches that appeared were "Bashful github" and "flickr integrations". Which got me thinking: how could our content be better for searching?


One of the things that wasn't appearing in searches was comparisons to "competitor" or complementary products. So I set about creating some pages comparing Bashfully to competitors. How to pick them? The first had to be LinkedIn, as the main player in this space; the second was GitHub, with its place as the "developer's resume". The final spot to test out this idea came from doing my own searches for the kind of terms that we wanted Bashfully to appear under ... and the winner was VisualCV.


When writing the comparisons I set myself a couple of rules. First, be positive when introducing a competitor: they are popular for a reason, often not the same as Bashfully's purpose, and they provide value to someone. The second was to actually highlight the features that we liked and wanted ... OK, maybe not all the features, but the ones we won't reasonably build this year.

As another stream for getting organic search, we are doing some work to join and support the open source resume creation community. This will probably be the subject of another post, so I won't go into it too much here.


The final step was to update the sitemap and resubmit to Google and Bing to get crawled. Next, monitor to see if it worked! Then tweak and improve the content.


Further reading

Tuesday, 31 July 2018

Creating a ProdPad progress report in R

Photo by rawpixel on Unsplash
My journey with writing new R scripts had taken a bit of a break recently. It started with exploring text mining, then creating my library of reports for Product Management. Although R takes a bit of getting used to, once you do it is like Excel x100. It is great for repeatable analysis and report generation, saving a lot of the hassle of the export, format, and save cycle that I was going through.


The problem

When I needed a new report, though, I took the chance to expand my skills. I wanted an automated report that showed progress in the product process, showing the pipeline of product ideas being worked on towards strategic goals. This should be in a format suitable to share with senior management, and be understandable, especially to those outside the Product team.


The solution

I had been manually noting figures from the ProdPad UI when I remembered that there was an API. I took a look, and it is a nice RESTful API with a JSON payload, so easy to integrate with. Checking GitHub, there were a couple of projects for ProdPad, but none doing this kind of reporting. A quick google of "json r" later and I had found and installed a JSON deserialisation package (jsonlite). This created my R objects from the API calls.
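As a rough sketch of that step - the endpoint path, query parameter, and response shape here are my assumptions for illustration, so check the ProdPad API reference for the real details:

```r
# Sketch only: the endpoint and auth style are assumptions, not the
# documented ProdPad API.
library(jsonlite)

api_key <- Sys.getenv("PRODPAD_API_KEY")  # hypothetical env var
url <- paste0("https://api.prodpad.com/v1/ideas?apikey=", api_key)

# fromJSON() fetches the URL and deserialises the JSON payload
# straight into R objects (lists / data frames)
ideas <- fromJSON(url)
str(ideas, max.level = 1)
```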

I could then pull in the ideas, making a secondary call to get the workflow status and any tags applied to each idea. I chose three tags to highlight, creating variables for which stream of work each idea applied to. These each then matched a goal from the strategic plan.
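The tag-to-stream mapping could be sketched with dplyr, using invented tag and stream names (the real ones depend on your strategic plan):

```r
# Illustrative only: tag names and stream labels are made up, and
# ideas_df is assumed to be a data frame with tags + status columns.
library(dplyr)

ideas_tagged <- ideas_df %>%
  mutate(stream = case_when(
    grepl("growth", tags)    ~ "Grow revenue",
    grepl("retention", tags) ~ "Retain customers",
    grepl("platform", tags)  ~ "Platform health",
    TRUE                     ~ "Other"
  ))

# Count ideas per stream and workflow status for the report
ideas_tagged %>% count(stream, status)
```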

Next I added it to my "R for Product Management" project on GitHub. This was along with some explanation to get started for other users.



Expanding insight

Looking down the list of APIs available I then saw feedback - and it was in a similar format to the Twitter feed in the text-mining example - so I created a word cloud of the past quarter's feedback. This should highlight trends in what people are asking for.
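A minimal word cloud sketch, assuming `feedback` is a data frame with a `text` column pulled from the feedback API:

```r
library(tm)
library(wordcloud)

# Build and clean a corpus from the feedback text
corpus <- VCorpus(VectorSource(feedback$text)) %>%
  tm_map(content_transformer(tolower)) %>%
  tm_map(removePunctuation) %>%
  tm_map(removeWords, stopwords("english"))

# Word frequencies across all feedback items
tdm <- TermDocumentMatrix(corpus)
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

wordcloud(names(freq), freq, max.words = 50)
```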

To dig a bit further I then used the topic modelling library. This allows you to find word associations with words from the roadmap, to try and reveal any insights that might have been missed by reading the feedback individually.
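Two related sketches of that idea, reusing the cleaned corpus and term-document matrix from the word cloud step ("delay" is just an example roadmap word, and k = 4 topics is a guess):

```r
library(tm)
library(topicmodels)

# Terms that correlate with a roadmap word at 0.3 or above
findAssocs(tdm, "delay", 0.3)

# A small LDA topic model over the feedback (empty documents need
# removing from the DTM first, or LDA will complain)
dtm <- DocumentTermMatrix(corpus)
dtm <- dtm[rowSums(as.matrix(dtm)) > 0, ]
lda <- LDA(dtm, k = 4)
terms(lda, 5)  # top 5 terms per topic
```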

This is a very early proof of concept, and likely to change a lot with use and feedback from the team. But it is part of my long-term plan to be more data-informed, where possible.



Lesson 1: using function files

For the first time I extracted my functions into a separate file - my former dev reflexes kicking in. This meant that my main script, processing and formatting the ideas and feedback, was short and easy to read. It also allowed the work to form part of a dplyr workflow, alongside standard R functions. I haven't put any tests around it yet; I probably need to learn about mocks in testthat.
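The shape of that refactor, with hypothetical helper names standing in for my real functions:

```r
# Main report script stays short: helpers live in a separate file.
# fetch_ideas(), add_streams(), summarise_by_goal() are invented
# names for illustration.
source("prodpad_functions.R")

library(dplyr)

report <- fetch_ideas() %>%
  add_streams() %>%
  summarise_by_goal()
```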


Lesson 2: knitr and kable

This was also the first time I had many raw data tables that I wanted to display in my Word doc. The plain-text output is OK for exploring in the IDE, but not great for sharing with other stakeholders. Kable doesn't have many options, but as a quick way to get a Word-style table it is perfect.
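In an R Markdown chunk this is about as simple as it gets (using a built-in dataset as a stand-in for my ProdPad data):

```r
library(knitr)

# kable() renders a data frame as a styled table when the
# document is knitted to Word
kable(head(mtcars[, 1:3]), caption = "Example table rendered with kable")
```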

I did have a little bit of an issue here with two chunks of code producing tables with a single line of text in the middle. For some reason it merged both tables and inserted the text into a row, with words split across the columns. A few blank lines at the end of the R chunks and around the text seemed to fix that.



I hope that someone out there finds some value in my R scripts. Please feel free to fork and send me pull requests with any improvements! I am also always happy to talk to data-informed members of the Product Management community who might be interested in using R.


Further reading
