Excellence in entrepreneurship: an audience with Google

I’ve just returned from an excellent talk organised by Cambridge Judge Business School’s Entrepreneurship Centre. Simon Hall hosted a fascinating discussion with Google leaders Jonathan Rosenberg, Senior VP of Alphabet and former Senior VP of Google Products, and Alan Eagle, Director of Executive Communications at Google.

The talk was primarily about the value of coaching, and celebrated the life of Silicon Valley luminary Bill Campbell (who, I admit, I hadn’t heard of), business coach to Steve Jobs, Eric Schmidt and others.

The event was primarily targeted at CJBS students, though there was a fair crowd of entrepreneurs and local business types; I chatted to a few after the talk.

Two things in the talk took me by surprise. The first was quite how entertaining Jonathan and Alan were together. They are clearly good friends and had a hilarious double act going on between them. Alan’s description of how Bill Campbell grilled him in a job interview was brilliant: he made Simon stand in for himself while he channelled an intimidating Bill (there’s a good photo of this moment here).

Secondly, everything Alan and Jonathan said was focussed on human values: how important empathy and developing your team are to business success, and how coaching is a great way to achieve that. I wasn’t expecting the talk to dwell so much on how important human-centred values are to business, but I was very glad to find myself agreeing with so much of what the two Silicon Valley business leaders said.

Alan retelling the story of how he was interviewed by Bill Campbell

A few notes from the talk…  

Interview as much as you can; it’s a good way to learn how to spot good people.
A good question to ask about experience: What did you learn from this?  

If you have to let someone go, do it with dignity. Why? Because it’s the right thing to do. Because it will affect the rest of your team if you don’t. And because it’s a small world: people you let go may be future opportunities, and they may talk about your business.

5 elements of a successful team:

  • Safety
  • Clarity of goals
  • Respect
  • Big mission that matters
  • A meaningful role

Find out more in Google’s Project Aristotle research (its sister study, Project Oxygen, covers what makes a good manager)  

An effective manager needs to marry the principles of coaching with management.  

Important future tech skills (Alan repeated this a lot):

  • Computer science
  • Machine learning
  • Data

Soft skills:

  • Passion, interest in learning
  • Smart creative dedicated to learning
  • Good communicator: make your point concisely and speak with passion

Important for Google’s success: Speed & simplicity  
All sorts of latency exist, so speed really matters. Fast results make people come back.

The concept of no managers didn’t really work for Google. They tried it for 18 months or so; when they asked the team, people said they wanted someone to mentor them and take decisions.  

Engineers need a career ladder that lets them rise to the highest levels of a company without having to become managers. If you’re the most senior engineer, the impact you can have on a tech company is profound, and you should be paid the same as (or more than) managers. Not many companies do this.  

Guide & lead, give people freedom. Don’t micromanage.  

There was a question about whether too much growth is bad. Seemingly not: you can hire brilliant people off the back of fast growth, and with an internet business you can often support it. Don’t worry about getting everything right or perfect.
Great quote: “If everything’s going right you’re not going fast enough”    

You can read more in their latest book, Trillion Dollar Coach, available from all good bookstores. I’ll be enjoying the copy I bought this evening!

Adding a Staging environment to Symfony 4

Environments in Symfony

We use Symfony a lot at Studio 24 for building modern web applications. Our normal setup is to have a local development environment, a staging environment for clients to test code, and a production live site.

By default Symfony supports the following environments:

  • dev – (development) intended for development purposes, can be used locally or on a hosting environment to test your application
  • test – (automated tests) intended for use when running automated tests (e.g. phpunit)
  • prod – (production) intended for the live web application

Ideally we would have a third web environment to represent staging, which is what we use to preview functionality before go-live. So that’s what I set out to do.

Adding a custom environment

I want to call my new environment stage to represent staging, since Symfony already uses shortened versions for other environments.

It turns out you can just add any old environment name and Symfony recognises this. So setting the new environment locally is really only a matter of updating your local environment settings file .env.local (you can also set this via actual server environment variables).

# Website environment
APP_ENV=stage

Environment configuration

Symfony loads environment variables from .env files. It uses the .env file for default values, then loads the .env.{environment} file for environment-specific settings. Finally, it loads the .env.local file for sensitive variables (e.g. API keys or database credentials – this file should not be committed to version control).
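The load order described above can be sketched in plain shell. This is purely illustrative (it is not how Symfony’s Dotenv component actually parses these files, and the variable names and values are made up), but it shows the key idea: later files win for any keys they redefine.

```shell
# Mimic the described precedence: defaults, then environment-specific
# values, then local secrets. Later files override earlier ones.
dir=$(mktemp -d)
printf 'APP_ENV=dev\nMAILER_URL=null://localhost\n' > "$dir/.env"
printf 'MAILER_URL=smtp://stage.example.com\n'      > "$dir/.env.stage"
printf 'APP_SECRET=not-committed-to-git\n'          > "$dir/.env.local"

# Source each file in order; a later assignment replaces an earlier one.
for f in "$dir/.env" "$dir/.env.stage" "$dir/.env.local"; do
    [ -f "$f" ] && . "$f"
done

# MAILER_URL now holds the .env.stage value, overriding the .env default.
echo "$MAILER_URL"
```

In a real application the Dotenv component handles all of this for you; the point is simply the override order.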

To help keep track of my staging environment variables I created a file at .env.stage to store these.
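A staging file might hold little more than the environment name plus any staging-specific service settings. The values below are purely illustrative (DATABASE_URL is a common Symfony convention, but your variable names and credentials will differ):

```shell
# .env.stage
APP_ENV=stage

# Illustrative staging-specific value; replace with your own
DATABASE_URL=mysql://db_user:db_password@127.0.0.1:3306/myapp_stage
```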

Package configuration

Different packages use YAML config files in the config/packages folder. I created the folder config/packages/stage/ which is used to store package configuration for the stage environment. It’s possible to inherit values from another environment via the imports key, which is really handy. Here I’m importing the prod settings for the stage environment.

# config/packages/stage/monolog.yaml
imports:
    - { resource: '../prod/' }

Composer packages

One gotcha: your PHP code may depend on a library that Composer loads locally in your dev environment (via require-dev), but that is not loaded on stage or prod.

When I first tested the above code it crashed, since Monolog (which is used for logging) was not found. It turns out Monolog was loaded in my local dev environment via symfony/debug-pack, which is set up to install only on require-dev in my composer file.

This simple composer require command quickly fixed it!

composer require symfony/monolog-bundle
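The split in composer.json then looks something like this (the version constraints here are illustrative). Anything under require-dev is skipped when dependencies are installed with composer install --no-dev, as is typical on stage and production servers, so any bundle your stage config relies on must sit under require:

```json
{
    "require": {
        "symfony/monolog-bundle": "^3.3"
    },
    "require-dev": {
        "symfony/debug-pack": "*"
    }
}
```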

Debug mode

Debug is enabled by default for dev and test environments, and disabled for prod and any new environments.

Normally I’d recommend not enabling debug mode for a staging site, since it is supposed to be an environment for previewing the site and should work in the same way as production.

However, you can enable debug and the Symfony debug bar for your new stage environment. First set APP_DEBUG in your .env.local file:

APP_DEBUG=true

Next, ensure the debug bundles are enabled. Edit config/bundles.php and ensure the WebProfilerBundle and DebugBundle are both enabled for the new stage environment.

Symfony\Bundle\WebProfilerBundle\WebProfilerBundle::class => ['dev' => true, 'test' => true, 'stage' => true],
Symfony\Bundle\DebugBundle\DebugBundle::class => ['dev' => true, 'test' => true, 'stage' => true],

Finally, create the following config files:

# config/packages/stage/debug.yaml
imports:
    - { resource: '../dev/' }

# config/packages/stage/web_profiler.yaml
imports:
    - { resource: '../dev/' }

# config/routes/stage/web_profiler.yaml
web_profiler_wdt:
    resource: '@WebProfilerBundle/Resources/config/routing/wdt.xml'
    prefix: /_wdt

web_profiler_profiler:
    resource: '@WebProfilerBundle/Resources/config/routing/profiler.xml'
    prefix: /_profiler

Summary

That’s it! It turns out it’s very easy to set up new environments in Symfony 4; most of the work is in enabling any bundles you require and ensuring the right config files are in place for your new environment.

The front-end of Headless CMS

I wrote a blog post on what Headless CMS is all about on my agency site back in December. Partly to help explain to our clients what this technology is about, partly to help crystallise some of my thoughts on the subject.

I’m currently embarking on an interesting project to build a set of tools for building front-end websites in PHP on top of Headless CMS technology. My aim is to open source this front-end code, though we need to prove it on a few projects first.

I thought I’d blog about my experience and findings here. The following is a short, technically focussed intro to it all. Continue reading “The front-end of Headless CMS”

OpenTech 2017

I made my first trip to OpenTech yesterday, hosted at University College London. I didn’t really know what to expect: I’d spotted the conference on my Twitter feed and understood it to be a day full of discussions on open data, technology and how they contribute to society.

I was impressed. It was a busy and passionate conference, full of people who work with tech trying to make a difference to society, making it more open and fair, against a challenging and often unhelpful world.

My day started with Hadley Beeman, a member of the W3C Technical Architecture Group. Hadley’s talk, “Standards for Private Browsing,” explained how user expectations of private browsing differ from how browsers actually implement it. Some US research found the most popular reason to use private browsing mode is to hide embarrassing searches; however, only Safari hides recent searches. Not helpful for users.

The concept of private browsing needs standardisation, not only to set users’ expectations about how their data is stored but also to let people build technology with confidence about how private mode will work. With the rise of Web Payments this is only going to become a larger issue. Hadley said more user research is needed in this area.

Rachel Coldicutt followed on with a passionate, excellent talk about Doteveryone, the think tank that is “fighting for a fairer internet.” Rachel gave a good overview of how Doteveryone is trying to improve digital understanding for everyone by focussing on education, standards for responsible technology, and stimulating new value models.

She talked about the rising power of the big four, “GAFA” (Google, Apple, Facebook, Amazon), and how much unaccountable power these companies wield on the internet today. With a government, if you disagree with how things are run, you can revolt; not so with Facebook. She revealed that just 7 developers are responsible for the Facebook timeline algorithm, a technology increasingly in the news for its perceived influence on recent political decisions. She also raised an interesting idea about a “fair trade” mark for the internet and how that could work.

The next session was by Anna Powell-Smith, who talked about an offshore property ownership project she worked on for Private Eye, pulling data sources together to build a map of properties in England and Wales owned by offshore companies. Offshore ownership is problematic because it’s often used for tax avoidance by those with dubious means of making money. Anna told an interesting story of how she matched FOI-requested data with the INSPIRE dataset (important, but restricted, geo-spatial data on properties), a process that seemed pretty convoluted and difficult but was successful. The Private Eye report was discussed in parliament, and it looks like the government is starting to make some positive moves towards making this data more available.

However, Ordnance Survey are legally obliged to make money out of their data, so they are not willing to make it completely open. The critical component Anna used in her research, matching INSPIRE IDs to title IDs, is no longer available without spending £3 per property, which makes it cost-prohibitive.

The government has put this requirement on Ordnance Survey to sell their data rather than make it open. Anna made a call for any economists to help make the case for why this data should be free and what positive economic impact that would have in the UK. If you can help, contact Anna at https://anna.ps/

The next speaker was ill, so John Sheridan helped out with an impromptu talk on his work at the National Archives. This was fascinating, touching on the different challenges of physical versus digital archives, how context is important in archived data, how copying is a core part of digital archiving (“there is no long term storage solution for digital”), how this also requires validating that the data you have stored is still the same (they use hashing to help with this), and how you need to understand the data you store so you can also provide a means to view it. The general message was that data encoded in open formats is easier to archive, and to make available in the future.

John also touched on the UK Web Archive project, run by the British Library, which holds a digital archive of around 3 petabytes, much of it unpublished online, largely for copyright reasons. While the US-based Internet Archive has a policy of publishing first and taking down content on request, as UK public institutions the British Library and National Archives have a lower appetite for the risk of legal action, and therefore only publish when they have permission to do so.

I chatted to John in the bar after the event, and he explained that the National Archives takes responsibility for archiving all government digital content, taking snapshots every 3 months or so, while the Web Archive project deals with UK websites. I asked him where a past project we worked on, the Armada Tapestries site for the House of Lords, would be archived. Apparently this is taken care of by Parliament itself, in the Parliamentary Archive. Lots of people archiving things!

After lunch I joined the Post Fact / Future News panel which turned out to be a real highlight of the day.

James, Wendy, Becky and Gavin

The speakers were James Ball, Wendy Grossman and Gavin Starks and the panel was hosted by Becky Hogge.

James started proceedings and talked eloquently and in detail about the difference between fake news (an outright lie, not so common in the UK) and post-truth bullshit (manipulation of an almost-truth), which is basically where we find ourselves today. James talked at speed and with confidence, painting a fascinating, dark picture of how news is being manipulated for political ends at present, and how a good narrative can often trump a complicated truth that is difficult to explain to the general public.

James made a great point that you “can’t use technology to solve cultural issues” and that “fake news is not an internet problem.” He highlighted that the problem is already in society, in figures such as Boris Johnson who have a long history of manipulating the truth for a political agenda. He’s written a book on this topic, so go buy it: Post-Truth: How Bullshit Conquered the World!

He also noted we need to “think about the business of the internet.” The idea of business and value models cropped up a few times during the day: a lot of the issues we associate with the internet are exacerbated by how the web makes money, and alternative models need to be found to improve the current state of affairs.

A very funny Wendy on what today’s nine year olds may think about future society

Wendy then moved on to future news. She talked about predictions she made in 1997 and how many of them hold some truth today. She went on to explore what younger generations will think about technology and society, and what future headlines are likely to be. Wendy’s talk was fabulous fun.

Gavin began his slot by reading out a written statement from Bill Thompson, who was due to speak but was otherwise waylaid at the Venice Biennale! Bill’s short piece on the rotten state of the net at present made for a sobering interlude to the discussion.

Gavin then moved on to the work he’s been involved in to make the internet more open: the Open Banking Standard, an anti-slavery corporate statement registry, and tracking the origin of products through the supply chain.

He talked about how we now need to up our game: the community thought the argument for open data had been won, but this is not currently the case.

Gavin is currently interested in creating impact@web-scale, trying to tackle solvable problems in the UK between policy and technology and bringing the public and private sectors together. He’s looking for people to help; you can sign up at http://www.dgen.net/ or find out more on his blog.

I’ve probably written too much already, but the rest of the afternoon was also enjoyable, peppered with public interest technology, Ada Lovelace Day (celebrating women in STEM), using climate change data to make a symphony, electrocution for fun and profit (and education!), using neural networks to help map happy places, what the Open Data Institute is up to, and a few beers in the union bar.

By the end of the day my head was full of ideas, problems and a better understanding of what people are doing in the area of open tech. I learnt a bunch of useful things that I can take away for my day-to-day work, and that will get me thinking about ways I can help make a difference and contribute to better, more open and responsible technology.

Finally, a shout out to Kevin Marks who as well as live tweeting most of OpenTech also wrote a whole bunch of interesting notes.

What I’m reading in 2017

It’s fair to say I read a lot. I love books and always have a few stacked up next to my bed for a quick (or long) read before I fall asleep. I also love books for work: although there is a huge amount of material on the web, published books distil expert knowledge, are peer-reviewed, and are a great way to get a good overview of a particular subject.

Continue reading “What I’m reading in 2017”

Viewing images on the command line and the “No identify available” error

I’ve been testing a website that generates images on the fly. In the past I had used the less command to view the file contents, which helped me spot when PHP errors had unfortunately made their way into an image file.

However, sometimes when viewing a file I got the following error returned:

No identify available
Install ImageMagick or GraphicsMagick to browse images

I’m pretty sure I worked this one out a few years ago, but had obviously forgotten. It turns out you can’t view binary files with a pager like less!

The right way to view an image file is with a hex viewer such as xxd. To view the top of a file (which usually reveals the file format via its magic bytes), use a command such as:

xxd /path/to/file.jpg | head

This command works just as well for text files, so it will still show when PHP errors are inside the image file instead of the correct binary data.
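As a quick illustration, you can fake up a file carrying a JPEG signature and inspect it (the file path here is just an example):

```shell
# Write the first four bytes of a JPEG (ff d8 ff e0) using octal
# escapes, which work in any POSIX shell's printf.
printf '\377\330\377\340' > /tmp/test-image.jpg

# The first line of the hex dump starts with the JPEG signature bytes.
xxd /tmp/test-image.jpg | head -n 1
```

If the site had instead written a PHP error into the file, you’d see readable ASCII text in the right-hand column of the dump rather than binary image data.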

Saving this one for later so I don’t forget again!