Wednesday 17 December 2014

Quick Technology Guide to Starting a Hedge Fund


While the initial to-do list might be staggering, this article attempts to address the technology components that need to be in place to ensure as smooth a launch as possible. Technological requirements will differ based on numerous factors, but what will not differ is that technology is an essential element of a successful launch.

Consultancy, Vendor Selection & Project Management

This may sound like a message from the Ministry of the Bleeding Obvious, but there is a fair amount to do in terms of vendor selection, product selection and managing the implementation of all these various elements. This translates to a large amount of time, and significantly more if the individual managing the process is not familiar with the current technological landscape. It pays to engage an independent consultant who can assist in identifying the various requirements and then apply them to the current marketplace in terms of suitability of vendors and systems. There are of course the alternatives of either having a tech-savvy team member take responsibility for managing the process, or engaging a technology partner that is going to feature prominently in your overall technology environment and having them manage all elements of the process. Regardless of which option is selected, the individual managing the process should have an awareness of the different vendors and their offerings, the different systems and their possible suitability, and the nuances of integration with prime brokers and market data feeds; finally, they should have experience in co-ordinating everything to work in harmony.

Network Requirements and Infrastructure

The first port of call is to assess the infrastructure requirements; once these have been agreed, key decisions such as cloud computing vs. on-premise solutions can be made easily. Certain elements such as data circuits and telephony need to be prioritised due to the lead times these usually involve. Following on from these, the relatively simple matter of workstations and other peripherals can be decided on. All of the above elements then need to be brought together with delivery, installation and testing. Other items that should be addressed at this point are identifying who will host your email and, ideally, your website as well. Your domain name will need to be chosen and registered, the website requirements agreed, and the website design company identified and engaged.

Software

The systems referred to here include portfolio management, order management and accounting software. Due diligence of vendors is essential, particularly of their support capabilities: not only that service desk hours match your operational requirements, but also that the appropriate level of service is available at an acceptable cost. Will the vendor be able to service you as a customer both today and in the future? How complex are the systems to integrate? What are their maintenance agreements like and what do they cover? How easy is it to use your preferred market data vendor with these systems? These are all questions that need to be considered.

 

Data & Research

This is driven primarily by the nature of the data you require and the demands of the software mentioned above, and it includes both market data and market research. Installation times can vary greatly, so they need to be confirmed early.

 

Integration

This is possibly the trickiest part, where all the software, data and processes need to come together and be tested. Integration refers not only to the integration between the systems, but to all data flow, which encompasses the market data, prime broker data and so on.

Business Continuity and Disaster Recovery

These overlap a great deal, but are subtly different. While business continuity deals with the continuation of mission-critical processes and how the business and its employees continue to operate, disaster recovery focuses on the infrastructure, both hardware and software, necessary to run the business, e.g. phones, email and systems. Additionally, the value of having comprehensive procedures cannot be overstated when it comes to attracting investment.

 

Avoiding Common Pitfalls

Rank requirements in terms of priorities and identify your nice-to-haves early; they will make a large difference in terms of choice. Costs, timings and, effectively, risk can quickly escalate when looking for the perfect solution.
Do not underestimate the impact of your systems and technology as a whole on attracting investment; this applies as much to infrastructure and software as it does to processes and procedures.
Managing IT effectively requires a specific skill set and an up-to-date understanding of the latest technologies. While many people within the finance sector will have the ability to perform this role to some level, their time is often better spent on the business as a whole. It is time consuming, and there are service providers, consultants and contractors that can be leveraged, each suited to different requirements. Do you really want to spend days writing VBA code and nights rebooting servers?



George Toursoulopoulos is a technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke software solutions.

Tuesday 25 November 2014

3 Reasons to move away from Excel Development


A spreadsheet is the tool of choice for a business starting up and requiring a quick, flexible, low-cost solution to meet a business requirement. However, as a business matures, its requirements are likely to change, and the same advantages can become disadvantages. Below I give you the 3 main reasons to move away from Excel spreadsheets, but in my conclusion I also give you a couple of key factors to consider before doing so, because in some instances Excel is still the best tool for the job.

Maintenance is a Mission

Excel macros and VBA code are often initially created by non-developers: usually intelligent and knowledgeable people, but not trained software developers. This normally results in something that is not only difficult to change, but, should it break, extremely difficult (read: expensive) to fix. Whoever coded it initially is most likely the only person who truly knows how it works, and every subsequent change will serve to further complicate matters, especially if the changes are undertaken by different individuals every time.

Cheap today, expensive tomorrow

While you don't have software licence fees, servers or cloud platforms to pay for, you still need to factor in the maintenance costs of supporting the spreadsheet, and the more code/macro heavy it is, the more expensive it is. The more ongoing changes that are required, the more expensive it gets. Who is making these changes? If it's not a member of your team, then where can you get access to Excel developers for short periods, at short notice and at a reasonable price? To make it viable for software companies, they have to charge accordingly, if they are willing to take it on at all. I have been contacted countless times by businesses urgently needing an Excel developer to fix a business-critical spreadsheet. Factor in new versions of Excel and how they can affect a macro/code heavy spreadsheet, and the costs mount up quicker than you can say "here is my letter to Santa".

Security? Not so much

Data within these spreadsheets is often of a sensitive nature, but controlling security and access to this data is challenging. I have heard of many instances where former employees have emailed themselves spreadsheets containing financial data or customer lists. This lack of security also impacts data integrity: who has made changes, and to what? I have seen instances of portfolio managers making investment decisions based upon incorrect data because another user had changed a formula in a cell.

 

Conclusion

There are many scenarios where an Excel-based application is the ideal solution; it can add a great amount of flexibility, allowing a business to get something up and running in a very short period of time, and usually for a low initial cost. The points above do illustrate, though, that there are some situations where the cons will outweigh the pros, and then it might be time to consider a move to a more structured product. The great thing about making that move is that Excel is a fabulous prototyping tool, allowing all the essential business requirements to be gathered from key users. So not all bad news!


George Toursoulopoulos is a technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke software solutions.

Wednesday 12 November 2014

Managing Software Quality


We all aim to build software with zero defects, but that can prove to be expensive, time consuming and next to impossible if the correct processes, checks and balances are not in place. This short article identifies the key points in the process to focus on in order to help ensure that defects remain manageable.


What Requirements?

The key points in this area are how requirements are gathered, documented, agreed and, just as importantly, communicated to the development team. This has a large impact on the perceived quality of the software and whether it in fact 'delivers', but it has to be balanced against the required timelines. A typical example would be use cases, which are essential for most software builds. Use cases document what the users need from the system and how they will use it; this in turn educates the developers (who mostly do not have direct contact with the sometimes high-profile end users) and QA, and increases the chances of delivering software that is fit for purpose. Using a system to manage requirements (documenting, sharing, signing-off and so on) is also recommended for larger builds; there are many off-the-shelf systems that cater for this. Find the time, or risk these end users becoming irate when their requirements are not met.


Speed or Quality?

The amount of time pressure impacts the build quality... fact! Getting key features delivered might be more important to the business than how robust the system is in the short term. That priority might also change over time, and what matters is that the stakeholders are the ones deciding where the focus should be. Some types of system have no room for defects, and in those scenarios a business putting severe time pressure on its development team will not produce the results it is after. The management of the development team need to be in regular, open communication with the stakeholders in order to be singing from the same hymn sheet. This will in turn impact the choice of project methodology (Agile or Waterfall), the testing process, and so the list goes on.


Testing 1,2,3

QA needs to be tailored to the project, but the underlying theme is that if the developers and QA do not have accurate information on how the system features will be used, then regardless of how much time and effort is spent, they will still most likely not deliver a polished system. The other key issue for me is environment: ensuring that the development and test environments mirror, as much as possible, the production environment. The final critical element of testing is structure. The scope of test plans, test cases and test reports, and the process for producing and consuming them, should be agreed so that testing is as efficient as possible, saving time and money that can be used to improve the quality of the product.

Measure

The final point that I want to make within the context of this article is to measure the defects within alpha and beta testing and also UAT. This allows a more objective and less emotional view to be taken of the actual quality of the software. Stats that I like seeing are release work items, issues reported and, most importantly, the categories of any reported issues, e.g. requirements related, environment related and delivery related. Keeping these stats over a few releases greatly improves the ability to manage the development life cycle. Interestingly, we see the greatest impact on the stats coming from a lack of use-case scenarios, and this is true across project methodologies.
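To make that concrete, a query along these lines over an issue-tracking table would produce the per-release category breakdown described above. This is a minimal sketch only; the table and column names are hypothetical, not from any particular tracking product:

    SELECT ReleaseVersion,
           Category,               -- e.g. requirements, environment, delivery
           COUNT(*) AS IssueCount
    FROM dbo.ReportedIssues        -- hypothetical issue-tracking table
    GROUP BY ReleaseVersion, Category
    ORDER BY ReleaseVersion, IssueCount DESC;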



George Toursoulopoulos is a technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke software solutions.

Thursday 30 October 2014

2 Tips for Improving Information Productivity


Introduction

Information Productivity is an informal term that refers to how efficient we are at organising and storing our information: emails, documents and in fact any content that relates to our work and the processes around it. Many organisations have systems and formalised processes for these kinds of activities, but they usually cater for storing information in its traditional document form and for some very linear processes, while not offering much in the way of how we share and collaborate in real life.

Sharing

This is by far the most common task that we need to perform with our organisation's information, and not all of it is easily catered for. Depending on your industry, you could need to share with co-workers, suppliers, clients and sometimes the public, and that is not so easily done. Email tends to be what most of us rely on for sharing, but it comes with many disadvantages, namely size restrictions on mailboxes, aggressive spam filters, the lack of receipt confirmation, missed emails and, worst of all, 'email time'. Coming back to your desk to countless emails is not something anyone looks forward to. Systems that allow the secure sharing of files will greatly increase productivity levels over time; these range from the relatively simple to far more complicated and expensive alternatives. The bottom line is that it should be very easy to securely share and collaborate on information without using email.

Collaboration

Sharing might be the most common task, but collaborating is the clear winner when it comes to consuming time. The process of sending information to one or more recipients (see sharing), getting their feedback, sifting through that feedback, giving feedback on the feedback, incorporating the appropriate feedback and then having every recipient eventually agree to the final form of the information accounts for an enormous amount of time. It doesn't end there: how painful is it when, months later, one of the parties wants to question a specific point? How do you find who said what, when and why? For many individuals this accounts for a large part of their day-to-day responsibilities. There are not many systems that cater for this, but as a bare minimum, multiple comments from multiple parties need to be supported, along with an easy way to compare the comments/changes, accept certain changes and approve final 'versions'. Structured workflows are also not the answer for every eventuality; they might be ideal for some tasks, e.g. processing an invoice, but for day-to-day collaboration a far more fluid system is required.

 

Summary

There are some systems out there that facilitate sharing and collaborating (Synetec have their own), but the purpose of this article is not to try and promote a particular system, merely to share some key areas that, if focused on, could improve productivity. After all, who wants to spend their life reading emails?


George Toursoulopoulos is a technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke software solutions.

Thursday 23 October 2014

Case Study: Investment Manager Renovates Legacy System


Industry: Financial Services

Introduction: This Investment Manager was frustrated with an old legacy system they had in place; it was not aligned with their business processes, and making changes to it in its current form was proving expensive and taking too much time. Team members were wasting time finding workarounds to reconcile the way the business was functioning in the present versus how it was in the past. They wanted a system that would be adaptable in the future, delivered for a reasonable price within a guaranteed time frame, and that wasn't going to impact other projects or day-to-day operations.

Challenges:
• The business had progressed and the processes had been improved over time, but the old legacy system functionality was not reflecting that. This made the improvements less impactful and more costly to try and retrofit into the old way of doing things that the legacy system had been built around
• As the business was continuing to evolve any small changes that were needed to improve productivity and efficiency were costly, the business needed the system to be more responsive to new requirements going forward
• Many of the key business systems that were required for the business to function relied completely on a few people, which significantly increased the key-man dependency
• Naturally the amount of data had increased over time, and this had a negative impact on the performance of the old legacy system
• The technology that the legacy system was based upon was outdated, which made support, maintenance and enhancement expensive and sometimes near impossible

Objectives:
• The project needed to be delivered in a guaranteed time frame in order to meet certain high profile business milestones
• The client had a perfectly capable development team in place, however with other projects and the responsibilities involved with day-to-day operations, they needed a short-term increase to their development capacity in order to get this project delivered, without increasing the head count in the long term
• New business functionality required and likely future requirements had to be quicker and cheaper to implement going forward
• Improve the stability and performance of the system
• Leverage the new functionality and capabilities of updated technologies
• Reduce key-man dependency through use of additional resources and improved documentation

Solution:
The Investment Manager partnered with Synetec to deliver a system that retained all the necessary features of the old legacy system while implementing the enhancements that team members needed to improve productivity. Not using internal developers allowed the potential risks of systems development to be mitigated and ensured that business-as-usual activities were not impacted. The renovated system is now far more agile and adaptable to future requirements, which removes hesitation around potential process changes and reduces expenses going forward.

Benefits:
Forecasts indicate that the future maintenance cost of the system should be reduced by an estimated 20%. The internal developers gained through knowledge transfer from the Synetec development team and are in a better position to incorporate best practices into other projects. System performance has improved by approximately 150% in some areas where users search and manipulate large data sets; this has been attributed to the redesign of the underlying architecture and the improved performance of the current technology frameworks.

 “Through moving to recent Microsoft technologies we have not only improved performance of the system, but have access to new capabilities that will make delivery of future requirements far more cost effective. Mixing internal and near-sourced developers also proved extremely beneficial for business risk as well as knowledge transfer and I am sure those benefits will be reaped for a long time to come.”

 – Gregory, Technical Director


George Toursoulopoulos is a technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke software solutions.

Monday 13 October 2014

Criteria for Successful Software Vendor Selection


The success of selecting a vendor really comes down to one thing: can they deliver? As you seek a vendor, keep that at the forefront of your mind at all times. Any cost savings or operational efficiencies you generate can be quickly negated if your organisation can't get what it really needs on time. This is rarely as straightforward as it sounds. The points below are what I categorise as the key considerations when selecting a software vendor.

Good Fit

The software needs not only to meet your key business requirements, but to be easily configurable to work within your environment. It has to be a good fit, and that needs to be recognised from the beginning. Depending on the scenario, integration can take up to 50% of total project time (I have often seen it take even longer). With that in mind, numerous real-life examples of the software working in a similar environment are extremely helpful and reassuring.

 

Quality and Expertise

This is where expertise comes in. If the vendor has an experienced team, they will have performed similar implementations numerous times in similar environments and will be in a position to avoid pitfalls and make the process run as smoothly as possible. The quality of the team does not relate only to their technical ability. If the vendor treats their team well, they will tend to retain them for longer, which deepens their experience and, specifically from your perspective, ensures that you are dealing with the same up-to-speed team members. How much wasted time and frustration results from having to explain your requirements or issues to different individuals who are being exposed to them, or to you, for the first time?

Location, Language and Culture

As you seek the vendor for your organisation, it's vital to keep in mind the location, language and culture of their workforce and, more to the point, how they align with your organisation. Communication issues encompass both language and culture, and when dealing with the communication of requirements, life is made a great deal easier when requirements that haven't been thrashed out to the finest detail are grasped implicitly. Cultural sensitivities aside, the best vendor for your needs will have a team that can easily communicate and work with your organisation, and that usually entails an onshore workforce.



George Toursoulopoulos is a technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke software solutions.

Wednesday 17 September 2014

Top 3 Tips to Effective SQL Server Monitoring


Introduction

In a discussion with an infrastructure administrator at one of our clients, I was asked how they could become more proactive about monitoring their SQL Server environment. I think this is a good question, because although almost all companies monitor whether their SQL Server environment is operational, not all of them monitor the key elements that indicate whether the environment might soon stop being operational or, to dramatise, indicate imminent disaster.
Effective SQL Server monitoring has to include the ability to forecast resource issues, monitor the individual environments and be alerted to any potential issues within an appropriate response time. Below are some of the key elements of that conversation, in article form.

 

Get the right tools

Having a monitoring tool is the first important point to make. I don't really mind whether it's bought off-the-shelf or custom-built, but it needs to be configurable, robust and allow the easy monitoring of the key elements without the need for loads of work every time a SQL Server database is added to the environment. Always use a separate environment for the monitoring tool; both the server and database need to be separate. It seems obvious, but if your server has failed and your monitoring service is on that server, it doesn't help much. Less important, but something to keep in mind, is to choose your monitoring intervals so that they are frequent enough to give you an appropriate response time, but not so frequent as to affect your server's performance. Routinely test your monitoring tool and also the impact the monitoring is having on your server's performance, especially after the addition of a new server or database to your environment. On that note, any new servers in your environment should be automatically added to the monitoring tool.

Devil is in the documentation

Documentation is an ugly word to most, but a necessary evil. In order to effectively monitor your SQL Server environment you need to know the database and server configurations, any changes to these, and how they deviate from the default values and from company standards. In summary, we need to monitor server properties, database properties, instance configuration, database configuration and security. This data should be collected across all servers and stored in a central repository, and the collection should be automated.
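As a minimal sketch of what that automated collection could look like (exactly which properties you capture will vary), the following T-SQL gathers a few server- and database-level properties that a scheduled job could write to a central repository:

    -- Server-level properties worth baselining
    SELECT SERVERPROPERTY('MachineName')    AS ServerName,
           SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('Collation')      AS ServerCollation;

    -- Database-level properties worth baselining
    SELECT name                AS DatabaseName,
           compatibility_level AS CompatibilityLevel,
           recovery_model_desc AS RecoveryModel,
           state_desc          AS CurrentState
    FROM sys.databases;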

Watch this space

So we have our monitoring tool, but what do we need to be sure we are watching? Backups are the first and most obvious port of call. Are the various database backups running? That includes the full backups, any differential backups and the log backups. Monitoring the storage location for availability is also recommended. Next would be the SQL Agent jobs: did any fail, did any take abnormally long, and did any fail to run at all? A potentially crucial difference exists between jobs that don't run and jobs that run but fail. Speaking of space, monitoring of database file sizes is essential. You should monitor not only certain set milestones (e.g. 80% full), but also the rate of growth over time, which is vital in planning for extra disk space.
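As a hedged example of the backup check above, a query along these lines against the standard msdb backup history flags databases whose last full backup is missing or more than 24 hours old; the 24-hour threshold is an assumption to be adjusted to your own backup schedule:

    SELECT d.name AS DatabaseName,
           MAX(b.backup_finish_date) AS LastFullBackup
    FROM sys.databases d
    LEFT JOIN msdb.dbo.backupset b
           ON b.database_name = d.name
          AND b.type = 'D'        -- 'D' denotes a full database backup
    WHERE d.database_id <> 2      -- tempdb is never backed up
    GROUP BY d.name
    HAVING MAX(b.backup_finish_date) IS NULL
        OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());

The same pattern extends naturally to differential ('I') and log ('L') backups.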
Next we would list memory, CPU and I/O, and again these would need to be tracked over time. Looked at as a collection of information, especially over time, these metrics can help diagnose any problem or potential problem areas. Lastly, we would definitely want to monitor specific issues such as deadlocks, extended blocked processes, long-running queries, index issues, high-severity errors and timeouts.
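To illustrate that last category, here is a minimal sketch that lists currently executing requests which are either blocked or have been running for more than five minutes; both thresholds are assumptions, and a real monitoring tool would log these results centrally rather than simply select them:

    SELECT r.session_id,
           r.status,
           r.blocking_session_id,            -- non-zero means this request is blocked
           r.wait_type,
           r.total_elapsed_time / 1000 AS elapsed_seconds,
           t.text AS query_text
    FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
    WHERE r.session_id > 50                  -- skip system sessions
      AND (r.blocking_session_id <> 0
           OR r.total_elapsed_time > 300000); -- longer than 5 minutes, in ms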

 

Conclusion

The main objective is to enable you to identify and respond to problems as quickly as possible, ideally before they even occur. This strategy will enable you to find the problem areas and, over time, improve the performance of the servers and the applications that use them. I have seen many instances, if you will excuse the pun, of organisations with properly configured monitoring tools identifying and fixing SQL Server issues before users were even aware of them.


George Toursoulopoulos is a technology specialist and Director at Synetec, one of the UK’s leading providers of software services and solutions.

Tuesday 26 August 2014

Case Study: Hedge Fund implements new information management system




Industry: Financial Services

Objective: Provide a system which consolidates all internal sources of data, specifically research and emails

IT Objectives:
• Deliver automated collation and indexing of all trade, research and email related information across systems and formats
• Implement a user-friendly and reliable system that enables authorised company employees to locate enriched information
• Reduce cost and time for access to company information
• Reduce load and increase performance of Exchange server

Business Objectives:
• Enable employees to locate and share related information from various sources
• Increase insight into access and use of internal information
• Remove the need for employees to use their mailboxes as information stores

Introduction:
This Hedge Fund was looking to improve employees' access to information held in various systems and formats, while reducing the use of Exchange mailboxes for storing data. All documents were being stored on a network drive, with access and changes not being audited. Furthermore, related information on different mediums (particularly email) was effectively inaccessible to other employees and, in practice, lost once the employee left the organisation.

Challenge:
Most organisations, and particularly financial services institutions, have user mailboxes measured in gigabytes. These emails hold important information that could be useful to other team members, yet is inaccessible to them. There is an expense related to this, in time and productivity, while it also forces the business to spend large sums of money keeping its Exchange server responsive. When an employee moves on, that information is effectively lost; it may reside on some offsite backup, but it is hidden and not easily referenced.
Key process-related information is stored in a variety of mediums and formats; for example, research can exist as a PDF on the network drive, an Excel spreadsheet with calculations, and emails containing relevant information. The challenge is to make that data accessible in a central location, enabling team members to effectively find and utilise it regardless of where it originated or where it resides.

Solution:
As the manner of communicating and sharing data continues to evolve, so must the methods and systems used to manage that valuable information. This forward-thinking hedge fund realised that in order to improve productivity and utilise their valuable information to maintain their competitive edge, they needed to implement an enriched information management system.

Benefits:
Roll-out of the system was completed ahead of schedule and on budget. Users have taken to it with surprising ease, information has been enriched to provide additional value, and email server storage has been reduced by just under a terabyte, which has improved the performance of Outlook.

Client Quote:
“The amount of usable information seems to be pouring out of nooks and crannies with the sharing of this information taken to a new level, the major advantage from our perspective is the continuity of the data and the traceability of who is doing what with our valuable and sensitive data."

 – Miles, CTO

Tuesday 5 August 2014

3 Tips for Choosing a Cloud Hosting Provider




Introduction
Cloud computing has transformed the IT landscape; public cloud offerings can help businesses reduce costs and increase business agility. These cloud services offer enormous economic benefits, but they also pose significant potential risks for enterprises that must protect corporate data while complying with industry and government regulations. The purpose of this article is to help enterprises make pragmatic decisions about the specific issues that should be examined before selecting a hosting provider.

Data Centre
To me this is the biggest issue. The industry leaders will have their own state-of-the-art data centres, geographically spread out to help reduce risk. That is not to say that there aren't some excellent vendors out there that use other data centres yet still provide a world-class support service, but as a rule of thumb I prefer vendors to have their own data centres, and a few of them. I would also visit the data centres before making a decision, to see how secure they are and to get a general feel for the place.

Outages
The proof of the pudding is in the eating. All vendors like to claim 99.9% uptime, but 80% of stats are made up. Joking. The truth, though, is often different, because everyone naturally has their own criteria for what counts as an outage, which is completely understandable. So speak to existing clients and see what they say about how many times the vendor's hosted systems were down over the previous year, and for how long.
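It is also worth doing the arithmetic on those claims: 99.9% uptime still allows roughly 8.8 hours of downtime a year (0.1% of 8,760 hours), whereas 99.99% allows under an hour, so two seemingly similar figures can mean very different things.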

Platform
Most vendors that we have dealt with run on VMware, but there are some that use other systems, whether mainstream or proprietary, so it is important to know which virtualisation software they use. Again, call me boring, but I prefer VMware: not only is it tried and tested, but should you ever want to leave, moving to another VMware-based vendor is a relatively simple process. Moving between vendors that run different virtualisation software, on the other hand, is no simple matter.

George Toursoulopoulos is a financial technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke financial services software solutions. 

Tuesday 8 July 2014

Case Study: Asset Manager rolls out new compliance investigation system

Industry
Financial Services

Objective
Provide a platform to reduce costs associated with managing external and internal compliance investigations

IT Objectives
  • Introduce a systematic methodology for providing a reliable and cost-effective compliance information repository
  • Deliver automated collation and indexing of all compliance related information across systems and formats
  • Reduce cost and time for access to company and system wide compliance related information

Business Objectives
  • Offer early detection, minimising risk and enhancing information quality
  • Embrace diversity of compliance related information in a cohesive system, supporting future changes
  • Accelerate investigation response times across the enterprise, increasing efficiency and adding business value

Introduction:
This Asset Manager was looking to reduce costs and timeframes associated with managing compliance investigations. Their Compliance Officers were spending valuable time gathering and sifting through large data sets to find what they required in order to deal with FCA, Exchange and internally initiated investigations.


Challenge:
Financial services institutions worldwide regularly receive and have to respond to a variety of compliance investigations, while striving to enhance information quality and increase agility and simultaneously reduce costs and mitigate risk. Compliance officers have to spend an increasing amount of time gathering and sifting through large data sets to find what they require, often having to rely on already busy IT resources to locate the relevant information, instead of being able to focus on keeping the organisation compliant. With regulatory pressure to demonstrate due care, forward-looking companies such as this one want another way to illustrate their commitment and diligence in meeting regulatory obligations.


Solution:
As companies have dramatically increasing amounts of information to manage, across increasingly diverse formats, a system has to be 'aware' of the other applications and frameworks within the organisation; only then can the relevant information be indexed, enriched and made available in an organic and company-wide manner. This progressive asset manager realised that in order to reduce costs and improve responsiveness, in both quality and time, they needed to adopt a compliance investigation system.


Benefits:
Roll-out of the system is complete: response time to compliance queries has been reduced by 30% on average, and costs have been reduced by 40% on average. These figures are based upon initial investigations and are likely to improve over time.


Client Quote: “Although introducing a new system can come with technological challenges, the primary challenge is usually cultural. However our Compliance Officers have embraced the system that has essentially empowered them. They now have more time to provide insight into the underlying information and are already more comfortable in knowing that all of the information is in front of them. We also all appreciate how easy the system is to use, which fast-tracked the training process.”
 – Simon, Group Compliance Officer


For more information regarding our software products please visit Synetec Compliance Investigation System

George Toursoulopoulos is a financial technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke financial services software solutions.

Thursday 26 June 2014

3 Tips for Speeding up Systems Delivery




The development of complex software involves significant challenges. To prosper, companies must balance engineering and technical excellence with increasingly challenging business objectives. What does that mean in real terms?

1. Managing costs
2. Managing development time
3. Meeting requirements
4. Controlling quality
5. Managing change
6. Complying with any applicable regulations

Traditional systems development must adapt to meet these goals effectively, because it focuses too strongly on its various functions in isolation, as opposed to taking a holistic view of the system. The experience of riding a bicycle cannot be understood by looking at how the gear system works or the function of the brakes as separate parts.


Requirements Management

Gathering of requirements should take place from the horse's mouth whenever possible. That often involves busy, high-profile users, but if you want them to be happy first time around, then get a little of their time and use it well. Analysts need to understand what the users want and share that in its entirety with the development team. Going straight to the functional specification level often results in information being lost, and the developers don't start with the best chance of making the right decisions along the way (this happens to be one of the biggest challenges when outsourcing offshore). Once the requirements have been gathered, they need to be effectively shared, so that everyone between the end user and the quality assurance team has access to them. Simple concept with powerful results! Engineers can refer to them for the little decisions that need to be made throughout the life cycle, and QA can go 'beyond' testing from a specification perspective.


Delivery Management

Agile methodology has gone a long way towards improving the likelihood of synchronising deliverables with expectations; however, we have found that using a hybrid of Agile and Waterfall concepts produces the best results. Every environment is different, so find the right recipe for that 'perfect bake', as Mary Berry would say! With cloud-based systems, rollout has become simpler, but only if it's considered from the very beginning. Architectural decisions should be influenced by the constraints and nuances of the infrastructure, rather than infrastructure being seen as a last stop.


Communications Management

Regular and open communication builds trust, improves performance and is essential in empowering team members to meet and exceed expectations. Accountability is essential! In larger organisations this is more challenging, but the more team members don't shy away from sometimes difficult conversations, and the more the key information is available to all project team members, the better the performance will get. Technical jargon is often used to conceal misunderstanding or, worse, poor performance, so it is well worth the time to decode it so that information can be truly shared.

George Toursoulopoulos is a financial technology specialist and Director at Synetec, one of the UK’s leading providers of bespoke financial services software solutions. 

Thursday 27 February 2014

Synetec looking to hire a talented .Net Developer

Synetec is a growing software development firm aimed primarily at the financial sector. We cover most areas of .NET software development, and we also provide support for new and legacy systems to a number of clients.
 
Synetec is now expanding and we are looking for a new developer to join us. As a valued member of the team, you will be encouraged, trained and supported in keeping up to date with the latest technologies.

Salary would depend on experience and be in line with market rates. 

Mandatory
VB.NET or C# (.NET) - 5 years' commercial experience
MS SQL Server
Unit Testing

Optional
WPF
Microsoft Development Certifications
.Net Web-based development

Applicants must register their interest and email their CVs to careers@synetec.co.uk

NO AGENTS PLEASE!

Additional Information
This is a permanent position with a software development firm in Central London, based near Waterloo. All candidates must have a minimum of 5 years' industry-related UK work experience and the necessary passport or visas to work indefinitely within the UK.