Character generation at the cost-conscious end of the market.

The main graphics players, VizRT and Chyron, have been around for some years now and have continually expanded their domain: MAM/CMS storage at the backend, streaming from live productions, and live interaction and statistics as they happen at the events. These systems usually require a large amount of state-of-the-art hardware, and it is sometimes difficult to separate out just the graphics from the increasing complexity and cost. The new NVG1 box from VizRT and NewTek may come at a price point that matches Brainstorm/Aston, Ventuz, or Ross Xpression, but it is still more expensive than smaller companies can manage. Luckily, the power of modern hardware is raising the quality and capabilities of lower-end character generator software.

[Image: ChrWorks]

There is increasing demand for simple straps and score bugs in the budget production area, where the income is never large enough to cover the cost of these higher-end graphics systems. These productions only need a few specific templates, and they don’t need the user to carry around some behemoth of a graphics PC filled with video input/output cards. There are a few systems around that are adequate for stills and very simple graphics: Wirecast, TriCaster LiveText, or even the built-in graphics in the popular vMix software vision mixer. Simple styles may be added with image selections, font choices and color variations. These give basic facilities for a few hundred pounds, but they don’t add that high-end gloss to the production and they are quite manual in their operation.


Ping Pong Championships

We returned this year for a repeat of the Ping Pong excitement at Alexandra Palace, this time over just two days, so busier than ever. There was also a draw in the middle, to make it more interesting for the players and more work for us. No budget, though. We were able to iron out some of the problems in the interface from last time and add a few graphics to make life easier for the operators, such as the history for a player through the competition:

[Image: PPHistory]

We also produced a set of data for Firebase in the cloud and a React.js page to display it live on a web page. I learnt a bit more React and it looked good, so I was very happy with the result. The full page can be seen at https://pingpong-32ba1.firebaseapp.com/, but these pics show the style of the groups,

[Image: PP2017Group]

and the knockout stages,

[Image: PP2017KO]
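To give a flavour of how the live page works, here is a minimal sketch of a React component subscribing to the Firebase Realtime Database. The `groups` path, the field layout and the project config are assumptions for illustration, not the actual schema used on the night.

```javascript
// Sketch only: subscribe to live results in Firebase and re-render on change.
// The 'groups' path and the data shape are assumptions, not the real schema.
import React from 'react';
import firebase from 'firebase/app';
import 'firebase/database';

firebase.initializeApp({
  databaseURL: 'https://pingpong-32ba1.firebaseio.com' // assumed project URL
});

class GroupTable extends React.Component {
  constructor(props) {
    super(props);
    this.state = { groups: {} };
  }

  componentDidMount() {
    // Any change pushed from the scoring side appears here immediately
    this.ref = firebase.database().ref('groups'); // 'groups' path is an assumption
    this.ref.on('value', snapshot => this.setState({ groups: snapshot.val() || {} }));
  }

  componentWillUnmount() {
    this.ref.off();
  }

  render() {
    return (
      <div>
        {Object.keys(this.state.groups).map(name => (
          <h2 key={name}>{name}</h2>
        ))}
      </div>
    );
  }
}

export default GroupTable;
```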

We used Trello again to manage the project, which made it a bit easier since we only had to check what we did last time: clear all the checkboxes and do all the tasks again! I used the automated system to produce my burnup, which ended up looking as below,

[Image: PP2017Burnup]
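For reference, the numbers behind a burnup point can be pulled straight from the Trello REST API. This is only a sketch of that idea, counting complete and incomplete checklist items on a board; the key, token and board id are placeholders.

```javascript
// Sketch: count complete vs total checklist items on a Trello board,
// i.e. the raw numbers behind one burnup data point.
// KEY, TOKEN and BOARD are placeholders, not real credentials.
const https = require('https');

const KEY = 'YOUR_KEY';
const TOKEN = 'YOUR_TOKEN';
const BOARD = 'YOUR_BOARD_ID';

const url = `https://api.trello.com/1/boards/${BOARD}/checklists?key=${KEY}&token=${TOKEN}`;

https.get(url, res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  res.on('end', () => {
    const checklists = JSON.parse(body);
    const items = checklists.reduce((all, cl) => all.concat(cl.checkItems), []);
    const done = items.filter(i => i.state === 'complete').length;
    console.log(`${done} of ${items.length} tasks complete`);
  });
});
```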

US Election graphics

We produced applications that took a feed from the remote DB team, who handled the result-input side of the operation, and drove the VizRT lower-thirds scene from News graphics. We did this in two main parts. First, a C# app listened to the feed and exposed an API to drive Viz through shared-memory commands. This app then talked to the web interface used by the operator to control the content on the lower third, managing the visibility and timing of each of the three sections: the current count of seats and votes, the vidiprinter display of the results coming in, and the block showing which states were due to declare in the next time period. The graphic shows the layers on the Viz scene.

[Image: ElectionLayers]

We wrote the web interface in React, as News graphics are moving to that standard for their other applications. We decided not to stretch our learning to include Redux, but I think we would use it next time, as the application became more complex than first planned. It worked well in the end, although we had the usual confused conversations about corrections and the like with the results team. As they were all new to the area, it seems that the experience of the previous team was lost as people moved on; perhaps it will be easier for the next election, now that they have this experience. Lots of chat on Slack helped, but we should have had a more consistent set of tests. The C# app used Swagger to describe its API, and that worked really well: it can produce a mock interface before the real data is ready, so early tests can run before the full implementation.
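In outline, the control page boiled down to toggling and timing those three sections and passing each change to the C# app. This is only an illustrative sketch of that shape; the section names, component and the /api/lowerthird endpoint are invented for the example and are not the production code.

```javascript
// Illustrative sketch: a React control for the three lower-third sections.
// The section names mirror the scene layers; /api/lowerthird is an invented endpoint.
import React from 'react';

const SECTIONS = ['seatCount', 'vidiprinter', 'dueToDeclare'];

class LowerThirdControl extends React.Component {
  constructor(props) {
    super(props);
    this.state = { visible: { seatCount: false, vidiprinter: false, dueToDeclare: false } };
  }

  toggle(section) {
    const visible = Object.assign({}, this.state.visible, {
      [section]: !this.state.visible[section]
    });
    this.setState({ visible });
    // Tell the C# app, which forwards the change to Viz via shared memory
    fetch('/api/lowerthird', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ section, visible: visible[section] })
    });
  }

  render() {
    return (
      <div>
        {SECTIONS.map(s => (
          <button key={s} onClick={() => this.toggle(s)}>
            {s}: {this.state.visible[s] ? 'On' : 'Off'}
          </button>
        ))}
      </div>
    );
  }
}

export default LowerThirdControl;
```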

Trello was used to track progress; the diagram is below. It shows a slow start, but bear in mind I was doing Ryder Cup work until week 39. Once working on the project, we made steady progress at a good speed. Producers became involved around week 42, hence the spike in requirements. We probably should have limited some of these, but we pushed for as much as we could fit in, and I think we covered more than was used on the night itself.

[Image: Election trello]

There was also a small node.js app to monitor the Twitter feed from the NeverNo cloud. NeverNo gave some CORS errors, so I used node.js both to get around that and to simplify the feed for the web app.
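In outline it was just a relay: fetch the feed server-side, trim it down, and serve it to the web app from the same origin so the browser never hits CORS. A minimal sketch of that kind of proxy follows; the feed URL and the fields passed through are placeholders, not the real NeverNo endpoint or payload.

```javascript
// Sketch of a CORS-avoiding relay: fetch the feed server-side, simplify it,
// and serve it to the web app from the same origin.
// FEED_URL and the field names are placeholders, not the real NeverNo feed.
const express = require('express');
const https = require('https');

const FEED_URL = 'https://example.neverno.example/feed'; // placeholder
const app = express();

app.get('/tweets', (req, res) => {
  https.get(FEED_URL, upstream => {
    let body = '';
    upstream.on('data', chunk => (body += chunk));
    upstream.on('end', () => {
      const items = JSON.parse(body);
      // Pass through only the fields the web app actually needs
      res.json(items.map(t => ({ user: t.user, text: t.text })));
    });
  }).on('error', () => res.status(502).json({ error: 'feed unavailable' }));
});

app.listen(3000, () => console.log('proxy listening on 3000'));
```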

Chat server with node.js

I added the code for a chat server to the GitHub repository. It’s a simple server that someone wanted as an example to deploy to Azure. The main software runs an Express server with Socket.IO on top, and message formats are JSON rather than plain text. The Socket.IO client script used by the web page is served by the Socket.IO server riding on the Express server. The code is below…
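A minimal sketch along those lines (the actual repository code may differ in detail, and the message fields here are just an example):

```javascript
// Sketch of the chat server: Express serving the page, Socket.IO on top,
// JSON message objects rather than plain text strings.
const express = require('express');
const http = require('http');
const socketIO = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = socketIO(server); // also serves /socket.io/socket.io.js to the page

app.use(express.static('public')); // the chat page lives in ./public

io.on('connection', socket => {
  socket.on('chat', msg => {
    // msg is a JSON object, e.g. { name: 'Bob', text: 'hello' }
    io.emit('chat', { name: msg.name, text: msg.text, time: Date.now() });
  });
});

const port = process.env.PORT || 3000; // Azure supplies PORT when deployed
server.listen(port, () => console.log(`chat server on ${port}`));
```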


At the Intel Labs with Edison and node.js

Overview

I spent some time at the Intel Labs in Swindon playing around with the new Edison credit-card-sized PC and the RealSense camera. The original plan was to produce a motorized platform to carry a camera; image-processing software could then watch for movement and track it, keeping it in the center of the camera view. This was perhaps asking too much in the short time we had available, but this post shows what we managed to get working and the thoughts we had for future adaptation.
I was using the Intel Edison Module and Arduino Breakout Board. In addition, I added the following items:
 
2 x Stepper motors, with their driver modules
1 x Joystick, to be used for setting the centre points
1 x push button
1 x LED to show the system had booted correctly.
As you might see from the photographs, the motors are mounted in a speedily built link that holds them at 90 degrees to each other: one for horizontal rotation, one to tilt the platform. This was definitely not a production model!
[Image: mainboardmotors]
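To give a flavour of the node.js side, a stripped-down boot indicator plus joystick read using Intel’s mraa library might look like this; the pin numbers are examples only, not the wiring we actually used on the rig.

```javascript
// Flavour of the node.js GPIO code on the Edison using Intel's mraa library.
// Pin numbers are examples only, not the wiring used on the actual rig.
const mraa = require('mraa');

// LED to show the system has booted correctly
const led = new mraa.Gpio(13);   // digital pin 13 (example)
led.dir(mraa.DIR_OUT);
led.write(1);

// Push button used when setting the centre point
const button = new mraa.Gpio(8); // digital pin 8 (example)
button.dir(mraa.DIR_IN);

// Joystick axes on two analogue inputs
const joyX = new mraa.Aio(0);
const joyY = new mraa.Aio(1);

setInterval(() => {
  if (button.read() === 1) {
    // Treat the current joystick reading as the new centre
    console.log('centre set to', joyX.read(), joyY.read());
  }
}, 100);
```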


VizRT days Bergen

Flying out of a sun-bathed Bergen, following a few days of very pleasant company and interesting technical discussion at the VizRT days. The weather had been largely dry, but it was a bonus to see the sun as we left. Very picturesque.

The conference followed the trend of NAB last month: lots of video streaming over IP and a move to file-based working from camera to transmission. Content management systems are needed to cope with the amount of video and stills being produced, and the output needs to be sent to a myriad of devices. Most content is HD, but viewers’ devices range from TVs down to mobiles, with the growth at the smaller end of the range. The graphics are becoming more flexible, with the ability to insert metadata and timing at source and to interpret that metadata at the output stage, once the final video player format has been decided.

The conference highlight was, of course, the Sky team, fronted by Martin Stanford, who led the effort to wake everyone up after the late social session the previous night. Sky are active in so many ways and lead the way with their mobile and streaming applications. The News workflow and anchor support with iPad and mobile was very slick.

[Image: Bergen_team]


NAB Social streams

There were social feeds to be seen throughout all the halls at NAB; it seems to be a requirement to engage with your listeners and viewers. Some people have stayed with a simple Twitter feed, some have brought in feeds from Facebook, and more recently all the integrators are expanding to bring in multiple feeds, adding graphics from the likes of Instagram. Twitter is by far the most popular and active stream for text messages.

There are two main problems when harvesting messages. The first is connecting to the feed in a way that you can handle; the second is picking out the best items and not letting the bad stuff through. The Twitter “firehose” can pour out huge amounts of data that most clients cannot handle, hundreds of millions of messages per day. For this reason, Twitter only allows normal connections to read up to 10% of the messages, and only in batches of a few hundred. Their trusted distributors do have access to all the data, and can give clients a more tailored feed at a rate they are able to handle. These distributors, such as DataSift, Gnip and MassRelevance, sell services and also run algorithms to give trending and other aggregated data. Other sites, such as Topsy, give an interactive interface for instant access. There is a growing number of specialist companies providing aggregation services, and because there are so many variations, there is a variety of solutions and pricing models to choose from. A fuller list of integration services is shown here.
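For a normal (non-firehose) connection, the filtered streaming endpoint is the usual route in. A minimal node.js sketch using the twit library, tracking a term and dropping obvious junk, might look like the following; the credentials are placeholders and the filtering rule is only an example of the kind of curation needed.

```javascript
// Sketch: read the filtered Twitter stream (the normal, rate-limited route,
// not the firehose) and apply some very basic curation before display.
// Credentials are placeholders; the filter rule is only an example.
const Twit = require('twit');

const T = new Twit({
  consumer_key: 'KEY',
  consumer_secret: 'SECRET',
  access_token: 'TOKEN',
  access_token_secret: 'TOKEN_SECRET'
});

const stream = T.stream('statuses/filter', { track: '#NABShow' });

stream.on('tweet', tweet => {
  // Crude curation: skip retweets and anything carrying a link
  if (tweet.retweeted_status || /https?:\/\//.test(tweet.text)) return;
  console.log(`@${tweet.user.screen_name}: ${tweet.text}`);
});
```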