by Jerome Kehrli
Posted on Tuesday Jan 18, 2022 at 12:11PM in Banking
The Podcast is available here and can be listened to directly from hereunder:
Happy listening!
The full transcript is available hereunder.
by Jerome Kehrli
Posted on Monday Dec 13, 2021 at 12:04PM in Big Data
For forty years we have been building Information Systems in corporations in the same way, with the same architecture, with very little innovation or change in paradigm:
- On one side, the Operational Information System, which sustains day-to-day operations and business activities. There, the 3-tier architecture and the relational database model (RDBMS - Relational Database Management System / SQL) have ruled for nearly 40 years.
- On the other side the Decision Support Information System - or Business Intelligence or Analytical Information System - where the Data Warehouse architecture pattern has ruled for 30 years.
Of course the technologies involved in building these systems have evolved in all these decades, in the 80s COBOL on IBM hosts used to rule the Information Systems world whereas Java emerged quickly as a standard in the 2000s, etc.
But while the technologies used in building these information systems evolved fast, their architecture, on the other hand, the way we design and build them, didn't change at all. The relational model ruled for 40 years alongside the 3-tier model in the Operational world, and in the analytical world the Data Warehouse pattern was the only way to go for decades.
The relational model is interesting and has been helpful for many decades. Its fundamental objective is to optimize storage space by ensuring an entity is stored only once (3rd normal form / normalization). It comes from a time when storage was very expensive.
But by imposing normalization and ACID transactions, it prevents horizontal scalability by design. An Oracle database, for instance, is designed to run on a single machine; it simply can't implement relational references and ACID transactions across a cluster of nodes.
Today storage is anything but expensive, yet Information Systems still have to deal with RDBMS limitations, mostly because that's the only way we knew.
On the Decision Support Information System (BI / Analytical System) side, the situation is even worse. In Data Warehouses, data is pushed along the way and transformed one step at a time: first into a staging database, then into the Data Warehouse database, and finally into Data Marts, highly specialized towards specific use cases.
For a long time we didn't have much of a choice, since implementing such analytics in a pull fashion (the data lake pattern) was impossible; we simply didn't have the proper technology. The only way to support high volumes of data was to push daily increments through these complex transformation steps every night, when the workload on the system is lower.
The problem with this push approach is that it is utterly inflexible. One can't change one's mind along the way and quickly introduce a new type of data. Working with daily increments means waiting 6 months to build a 6-month history. Not to mention that the whole process is amazingly costly to develop, maintain and operate.
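The staged push pipeline described above (staging, then warehouse, then marts) can be sketched minimally in Python. The record fields and transformations here are invented purely for illustration, not an actual ETL implementation:

```python
# Hypothetical sketch of the nightly "push" ETL: raw operational rows
# are pushed through staging, warehouse and mart, one step at a time.

def extract_daily_increment(operational_rows, day):
    """Staging: keep only the given day's raw rows."""
    return [r for r in operational_rows if r["day"] == day]

def load_warehouse(staging_rows):
    """Warehouse: conform the staged rows to the warehouse schema."""
    return [{"customer": r["customer"], "amount": r["amount"], "day": r["day"]}
            for r in staging_rows]

def build_mart(warehouse_rows):
    """Data mart: aggregate for one specific use case (spend per customer)."""
    totals = {}
    for r in warehouse_rows:
        totals[r["customer"]] = totals.get(r["customer"], 0) + r["amount"]
    return totals

rows = [
    {"customer": "alice", "amount": 120, "day": "2021-12-12"},
    {"customer": "bob", "amount": 80, "day": "2021-12-12"},
    {"customer": "alice", "amount": 40, "day": "2021-12-11"},
]
mart = build_mart(load_warehouse(extract_daily_increment(rows, "2021-12-12")))
print(mart)  # {'alice': 120, 'bob': 80}
```

The inflexibility shows even in this toy version: adding a new kind of data means touching every stage, and the mart only ever contains what the daily increments have accumulated.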
So for a long time, RDBMSes and Data Warehouses were all we had.
It took the Internet revolution, and the web giants hitting the limits of these traditional architectures, for something different to finally be considered. The Big Data revolution has been the cornerstone of all the evolutions in Information System architecture we have been witnessing over the last 15 years.
The latest step in this architecture evolution (or revolution) is micro-services, where the benefits originally specific to the analytical information system finally overflow into the operational information system.
Where Big Data was originally a lot about scaling the computing along with the data topology - bringing the code to where the data is (data tier revolution) - we're today scaling everything, from individual components requiring heavy processing to message queues, etc.
In this article, I want to present and discuss how Information System architectures evolved from the universal 3-tier (operational) / Data Warehouse (analytical) approach to the micro-services architecture, covering Hadoop, NoSQL, Data Lakes, the Lambda architecture, etc., and introducing all the fundamental concepts along the way.
by Jerome Kehrli
Posted on Friday Sep 03, 2021 at 11:17AM in Big Data
(Article initially published on NetGuardians' blog)
NetGuardians overcomes the problems of analyzing billions of pieces of data in real time with a unique combination of technologies to offer unbeatable fraud detection and efficient transaction monitoring without undermining the customer experience or the operational efficiency and security in an enterprise-ready solution.
When it comes to data analytics, the more data the better, right? Not so fast. That’s only true if you can crunch that data in a timely and cost-effective way.
This is the problem facing banks looking to Big Data technology to help them spot and stop fraudulent and/or non-compliant transactions. With a window of no more than a hundredth of a millisecond to assess a transaction and assign a risk score, banks need accurate and robust real-time analytics delivered at an affordable price. Furthermore, they need a scalable system that can score not one but many thousands of transactions within a few seconds and grow with the bank as the industry moves to real-time processing.
AML transaction monitoring might be simple on paper but making it effective and ensuring it doesn’t become a drag on operations has been a big ask. Using artificial intelligence to post-process and analyze alerts as they are thrown up is a game-changing paradigm, delivering a significant reduction in the operational cost of analyzing those alerts. But accurate fraud risk scoring is a much harder game. Some fraud mitigation solutions based on rules engines focus on what the fraudsters do, which entails an endless game of cat and mouse, staying up to date with their latest scams. By definition, this leaves the bank at least one step behind.
At NetGuardians, rather than try to keep up with the fraudsters, we focus on what we know and what changes very little – customers’ behavior and that of bank staff. By learning “normal” behavior, such as typical time of transaction, size, beneficiary, location, device, trades, etc., for each customer and internal user, and comparing each new transaction or activity against those of the past, we can give every transaction a risk score.
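The idea of scoring a new transaction against a learned "normal" can be illustrated with a deliberately naive sketch. The single-feature z-score below is an invented toy, not NetGuardians' actual model, which profiles many behavioral dimensions at once:

```python
from statistics import mean, stdev

# Toy illustration: score a new transaction by how far its amount
# deviates from this customer's historical "normal". Real behavioral
# profiling combines many features (time, beneficiary, device, ...).

def risk_score(history_amounts, new_amount):
    mu = mean(history_amounts)
    sigma = stdev(history_amounts) or 1.0  # avoid division by zero
    return abs(new_amount - mu) / sigma    # higher = more anomalous

history = [100, 120, 90, 110, 105]   # customer's usual payment sizes
print(risk_score(history, 115))      # small deviation, low score
print(risk_score(history, 5000))     # large deviation, high score
```

The key property is the one the excerpt relies on: no rule about fraudster tactics is needed, only a per-customer baseline against which anomalies stand out.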
by Jerome Kehrli
Posted on Friday Jun 11, 2021 at 10:25AM in Agile
In my current company, we embrace agility down the line, from the Product Management Processes and approaches down to the Software Development culture.
However, from the early days and due to the nature of our activities, we understood that we had two quite opposed objectives: on one side the need to be very flexible and change quickly priorities as we refine our understanding of our market and on the other side, the need to respect commitment taken with our customers regarding functional gaps delivery due dates.
In terms of road-mapping and forecasting, these two challenges are really entirely opposed:
- Strong delivery due dates on project gaps with hard commitments on planning. Our sales processes and customer delivery projects are anything but Agile. We know when in the future we will start any given delivery project and we know precisely when the production rollout is scheduled, sometimes up to 12 months in advance. We have, most of the time, a small set of Project Gaps required for these projects. Since we need to provide the delivery team with these functional gaps a few weeks prior to the production rollout, we actually face hard delivery due dates for them, sometimes 12 months in advance.
- Priorities changing all the time as our sales processes and market understanding progress. We are an agile company, and mid-term and sometimes even short-term focus changes very frequently as we sign deals and refine our understanding of our market, not to mention that the market itself evolves very fast.
These two opposed challenges are pretty common in companies that are refining their understanding of their Product-Market Fit. Having to commit strongly on sometimes heavy developments up to more than a year in advance, while at the same time changing the mid-term and short-term priorities very often is fairly common.
In this article, I would like to propose a framework for managing such a common situation by leveraging a roadmap as a communication, synchronization and management tool, inspired by what we do in my current company (building on some elements brought by Mr. Roy Belchamber, whom I take the opportunity to salute here).
There are three fundamental cultural principles and practices that are absolutely crucial in our context to handle such opposed objectives. These three elements are as follows:
- Multiple interchangeable development teams: multiple teams that have to be interchangeable are required to be able to spread the development effort among flexible evolutions - that can be reprioritized at will - and hard commitments - that need to be considered frozen and with a fixed delivery due date.
- Independent and autonomous development teams: these development teams need to be able to work entirely independently, without any friction from any other team. This is essential to have reliable estimations and forecasts. A lot of the corollary principles and practices I will be presenting in this article serve this very objective.
- An up-to-date and outcome-based Roadmap: having a roadmap that crystallizes the path and the foreseen development activities over the next 2 years is absolutely key. Such a roadmap is at once an internal communication tool, an external communication support, and a management and planning tool.
In this article, I intend to present the fundamental principles behind the design and maintenance of such a roadmap that are required to make it a powerful and reliable tool - and not yet another good-looking but useless drawing - along with everything that is required in terms of Agile principles and practices.
by Jerome Kehrli
Posted on Tuesday Jun 08, 2021 at 09:18AM in Banking
(Article initially published on NetGuardians' blog)
Whenever our software is run head-to-head in a pitch situation against that of our rivals, we always come out top. We always find more fraud with a lower number of alerts. For some, this is a surprise – after all, we are one of the youngest companies in our field and one of the smallest. To us, it is no surprise. It is testament to our superior analytics.
A focus on customer behavior
We began working in fraud prevention in 2013 and quickly realized the futility of rules engines in this endless game of cat-and-mouse with the fraudsters. The criminals will always tweak and reinvent their scams; those trying to stop the fraud with rules engines will always be left desperately working as fast as possible to identify and incorporate the latest scams into their surveillance. Far better to focus on what we know changes very little – customer behavior.
If a bank knows how a customer spends money, it can spot when something is awry by looking for anomalies in transaction data. However meticulous the fraudster is at trying to hide, every fraudulent transaction will have anomalous characteristics. People’s lives are constantly changing – they buy from new suppliers, they move house, go on holiday and their children grow up – all of which will affect their spending and transaction data. Every change will throw up false alerts that will undermine the customer experience unless you train your models correctly.
The three pillars of 3D AI
We train our models using what we call our 3D AI approach. This enables them to assess the risk associated with any transaction with extraordinary accuracy, even if it involves new behavior by the customer. This also keeps false alerts to the minimum.
Developed by us at NetGuardians, this approach has three pillars, each of which uses artificial intelligence (AI) to constantly update and hone the models.
The pillars are: anomaly detection, fraud-recognition training analytics and adaptive feedback. Together, they give our software a very real advantage by not only spotting fraud and helping banks stop fraudulent payments before any money has left the account, but also by minimizing friction and giving the best possible customer experience. This is what differentiates our software in head-to-head pitches.
by Jerome Kehrli
Posted on Tuesday May 11, 2021 at 04:57PM in Geeks-up !
Well. For once, this will be an article far away from the kind of content I usually post on this blog. This is a post about COVID-19 and its vaccines. But it's perhaps the most important thing I will ever have written.
Since the start of the week, various events have cruelly reminded us of how dangerous COVID is.
The situation in India in particular, where infections are increasing at an absolutely terrifying rate, is very worrying. India this week set a world record for new daily COVID cases, with more than 400,000 daily infections in recent days (and this is probably underestimated) and more than 2,000 daily deaths.
It is time for everyone to realize that our only way out of this long-term humanitarian disaster is clearly through herd immunity.
We have finally had an incredible chance for a few months: in Europe and the US we have wide access to many vaccines. The speed of vaccination has increased. And that is great.
The reason that prompts me to write today is that the #AntiVax are now emerging as the main pitfall on the way out of this disaster and that we - scientists or simply educated people - have the responsibility to react. All over Western countries, the same signs are being seen: some vaccination centers have already gone from insufficient supply a few weeks ago to insufficient demand today. And that is a disaster.
- For example, in the US, according to the CDC, the average daily immunization rate has dropped by 20% since the beginning of April. A lot of centers find themselves with excess doses and open appointments remaining, for example in South Carolina (see US media). As another example, in the county of Palm Beach, where a large number of vaccination centers have opened in recent weeks, the health department announces that it has 10,000 appointment slots across its various sites that remain vacant.
- In Europe, the situation is unfortunately no better. In France, for instance, several vaccination centers claim that people are not registering for the vaccine as quickly as they had expected and that others are not showing up. The bottom line is that multiple doses are not given at the end of the day and some must be thrown away. This is madness.
A worrying and growing number of people are reluctant to get the free COVID vaccine.
However, these vaccines could save not only your life, but also the life of the people around you.
The bottom line is that for the Coronavirus, the herd immunity threshold would be between 70 and 90% of the population. It is this threshold that we have to reach as quickly as possible. This must be our most sacred goal.
In the US, for example, a survey found that if 60% of American adults have either received a first dose of the vaccine or wish to be vaccinated, 18% respond "maybe" and 22% categorically refuse.
Let us imagine that in the long term the undecided convince themselves to get vaccinated; that only places us at 78%. And unfortunately, this poll does not take into account children, who are not currently eligible for vaccination and who represent about 22% of the population. From this perspective, achieving herd immunity today seems illusory.
We must therefore ensure that the vaccine is given to as many adults as possible as quickly as possible. And that means we really all need to get vaccinated.
I can understand that young, athletic and healthy people do not see the benefits of vaccination. We might not get seriously ill from COVID or even get sick at all, right?
But it could always be inadvertently passed on to someone who could then die.
And only vaccinating people at risk is not satisfactory, since the circulation of the virus would not be stopped. And this is the real problem: the more the virus circulates, the more likely it is that we will see mutations appear that make it more dangerous, and perhaps even strains capable of fully resisting current vaccines, thus bringing us back to the starting point.
The point is, the current hesitation over vaccination is terribly problematic.
So in the rest of this article, I would like to review and discuss most of the "arguments" of the #Antivax groups.
by Jerome Kehrli
Posted on Monday Aug 17, 2020 at 10:31AM in Agile
The Search for Product-Market Fit is the sinews of war in a startup company. While the concept and whereabouts are well known by most founders, the importance of this event in the company and product building process, what it means to be Before-Product-Market-Fit and After-Product-Market-Fit, and the fundamental differences in terms of objectives, processes, culture, activities, etc. between these two very distinct states are almost always underestimated or misunderstood.
Product-Market Fit is the sweet spot that startups reach when they eventually feel that the market is really pulling their product out of them. It's what happens when they find that they can only deliver as fast as customers buy their product, or that they can only add new servers to the SaaS cloud as fast as required to sustain the rise in workload.
Product-Market Fit is so important because it has to be a turning point in the life of a young company.
- Pre-Product-Market Fit, the startup needs to focus on the leanest possible ways to solve Problem-Solution Fit, define and verify its business model, and eventually reach Product-Market Fit.
- Post-Product-Market Fit, the company becomes a scale-up and needs to ramp up its marketing roadmap and effort, build and scale its sales team, build mission-centric departments, hire new roles and recruit new competencies, etc.
Dan Olsen designed the following pyramid to help figure what Product-Market Fit means (we'll be discussing this in length in this article):
Understanding Product-Market Fit and being able to measure and understand whether it's reached or not is crucial. Reaching PMF should be the core focus of a startup in its search phase and understanding whether it's reached is key before scaling up.
This article is an in-depth overview of what Product-Market-Fit means and the various perspectives on how to get there. We will present the Lean-Startup fundamentals required to understand the process and the tools to reach product market fit, along with the Design Thinking fundamentals, the metrics required to measure it, etc.
TDD - Test Driven Development - is first and foremost a way to reduce the TCO of Software Development
by Jerome Kehrli
Posted on Saturday Jan 18, 2020 at 11:23PM in Agile
Test Driven Development is a development practice from eXtreme Programming which combines test-first development - where you write a test before writing just enough production code to fulfill that test - with refactoring.
TDD aims to improve the productivity and quality of software development. It consists in jointly building the software and its suite of non-regression tests.
The principle of TDD is as follows:
- write a failing test,
- write code for the test to work,
- refactor the written code,
and start all over again.
Instead of writing functional code first and then the testing code afterwards (if one writes it at all), one instead writes the test code before the functional code.
In addition, one does so in tiny steps - writing one single test and a small bit of corresponding functional code at a time. A programmer taking a TDD approach shall refuse to write a new function until there is first a test that fails - or even doesn't compile - because that function isn't present. In fact, one shall refuse to add even a single line of code until a test exists for it. Once the test is in place, one then does the work required to ensure that the test suite passes (the new code may break several existing tests as well as the new one).
This sounds simple in principle, but when one is first learning to take a TDD approach, it does definitely require great discipline because it's easy to "slip" and write functional code without first writing or extending a new test.
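The red, green, refactor cycle can be sketched in Python. The `add` function and its test below are a minimal invented example for illustration only:

```python
import unittest

# Step 1 (red): write a single failing test first.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        # This test fails until add() exists and behaves correctly.
        self.assertEqual(add(2, 3), 5)

# Step 2 (green): write just enough production code to pass the test.
def add(a, b):
    return a + b

# Step 3 (refactor): improve the code while the test suite protects you,
# then start over with the next failing test.

program = unittest.main(module=__name__, argv=["tdd"], exit=False)
print(program.result.wasSuccessful())  # True
```

The discipline lives in the ordering: the test exists and fails before the function does, so every line of functional code is born already covered.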
In theory, the method requires the involvement of two different developers, one writing the tests, the other writing the code. This avoids subjectivity issues. Kent Beck has given plenty of examples of why and how TDD and pair programming fit eXtremely well together.
In practice, most of the time a single developer tends to write both the tests and the corresponding code all alone, which still reinforces the integrity of new functionality in a largely collaborative project.
There are multiple perspectives on what TDD actually is.
For some, it's about specification and not validation. In other words, it's one way to think through the requirements or design before writing the functional code (implying that TDD is both an important agile requirements technique and an agile design technique). These people consider that TDD is first and foremost a design technique.
Another view is that TDD is a programming technique streamlining the development process.
TDD is sometimes perceived as a way to improve quality of software deliverables, sometimes as a way to achieve better design and sometimes many other things.
I myself believe that TDD is all of this but most importantly a way to significantly reduce the "Total Cost of Ownership (TCO)" of software development projects, especially when long-term maintenance and evolution is to be considered.
The Total Cost of Ownership (TCO) of enterprise software is the sum of all direct and indirect costs incurred by that software, where the development, for in-house developed software, is obviously the biggest contributor. Understanding and forecasting the TCO is a critical part of the Return on Investment (ROI) calculation.
This article is an in-depth presentation of my views on TDD and an attempt to illustrate my perspective on why TDD is first and foremost a way to regain control over large Software Development Projects and significantly reduce their TCO.
by Jerome Kehrli
Posted on Friday Dec 06, 2019 at 05:00PM in Banking
Yesterday we were amazed by the first smartphones. Today they have almost become an extension of ourselves.
People are now used to being connected all the time, with highly efficient devices and highly responsive services, everywhere and for every possible need.
This is a new industrial revolution - the digitization - and it forces corporations to transform their business models to meet customers on these new channels.
Banks worldwide are on the front line in this regard, and for many years now they have well understood the urgency of proclaiming digitization a key objective.
From a user perspective, the digitization confers enormous benefits in the form of ease, speed and multiple means of access and a paradigm shift in engagement. Since banking as a whole benefits from going digital, it is only a matter of time before operations turn completely digital.
The journey to digital transformation requires both strategic investments and tactical adjustments in orienting operations for the digital road ahead.
Fortunately, if technology can be perceived as a challenge, it is also a formidable opportunity.
And in this regard, Artificial Intelligence is a category of its own.
by Jerome Kehrli
Posted on Friday Apr 05, 2019 at 11:40AM in Banking
In my current company, we implement a state-of-the-art banking Fraud Detection system using Artificial Intelligence running on a Big Data Analytics platform. When working on preventing banking fraud, looking at SWIFT messages is extremely interesting: 98% of all cross-border (international) funds transfers are indeed transferred using the SWIFT Network.
The SWIFT network enables financial institutions worldwide to send and receive information about financial transactions in a secure, standardized and reliable environment. Many different kinds of information can be transferred between banking institutions using the SWIFT network.
In this article, I intend to dissect the key SWIFT Messages Types involved in funds transfers, present examples of such messages along with use cases and detail the most essential attributes of these payments.
These key messages are as follows:
- MT 101 - Request for Transfer
- MT 103 - Single Customer Credit Transfer
- MT 202 - General Financial Institution Transfer
- MT 202 COV - General Financial Institution Transfer for Cover payments
This article presents each of these messages, discusses their typical use cases and details the key SWIFT fields involved.
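To make the field layout concrete, here is a simplified, hypothetical MT 103 text block (block 4 only, with an invented reference and IBANs; real messages also carry header blocks and many more fields), parsed with a naive regex:

```python
import re

# Simplified, invented MT 103 block 4 for illustration only.
mt103_block4 = """:20:REF-2019-0001
:23B:CRED
:32A:190405CHF12500,00
:50K:/CH9300762011623852957
JOHN DOE, GENEVA
:59:/FR1420041010050500013M02606
JANE SMITH, PARIS
:71A:SHA"""

# Split the block into {tag: content} pairs; each field starts with :TAG:
fields = dict(re.findall(r":(\w+):([^:]*?)(?=\n:|\Z)", mt103_block4))

# Field 32A packs value date (YYMMDD), currency and amount together.
date, currency, amount = fields["32A"][:6], fields["32A"][6:9], fields["32A"][9:]
print(currency, amount)  # CHF 12500,00
```

Here field 20 is the sender's reference, 50K the ordering customer, 59 the beneficiary and 71A the charges arrangement; a production parser would of course use a proper SWIFT library rather than a regex.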
by Jerome Kehrli
Posted on Monday Feb 18, 2019 at 08:42AM in Computer Science
The world of fraud prevention in banking institutions has always been largely based on rules.
Bankers and their engineers were integrating rules engines on the banking information system to prevent or detect most common fraud patterns.
And for quite a long time, this was sufficient.
But today we are experiencing a change of society, a new industrial revolution.
Today, following the first iPhone and the later mobile internet explosion, people are interconnected all the time, everywhere and for all kind of use.
This is the digital era and the digitization of means and behaviours forces corporations to transform their business model.
As a consequence, banking institutions are going massively online and digital first. Both the bank users and customers have evolved their behaviours with the new means offered by the digital era.
And the problem is:
How do you want to protect your customer's assets with rules at a time when, for instance, people connect to their swiss ebanking platform from New York to pay for a holiday house rental in Morocco? How would you want to define rules to detect frauds when there are almost as many different behaviours as there are customers?
by Jerome Kehrli
Posted on Monday Feb 04, 2019 at 12:10PM in Banking
The below is an extract from an interview I ran in February 2019 during the EPFL Forward event.
NetGuardians is a Swiss software publisher based in Yverdon-les-Bains that edits a Big Data Analytics solution deployed at financial institutions for one key use case: fighting financial crime and preventing banking fraud.
Banking fraud is meant in the broad sense here: both internally and externally.
Internal fraud is when employees misappropriate funds under management and external fraud is when cyber-criminals compromise ebanking applications, mobile devices used for payment or credit cards.
In the digital age, the means of fraudsters and cyber-criminals have drastically increased.
Cyber-criminals have become industrialized, professionalized and organized. The same technology they use against banks is also what gives us the means to protect banks.
At NetGuardians we deploy an Artificial Intelligence that monitors on a large scale, in depth and in real time all activities of users, employees of the bank, but also those of its customers, to detect anomalies.
We prevent bank fraud and fight financial crime by detecting and blocking all suspicious activity in real time.
Jérôme Kehrli, how did you manage to convince a sector that is, in essence, very traditional, to trust you with your digital tools to fight against fraud?
Two different worlds, two languages, two visions?
The situation of the banks is a bit peculiar: the digitization, and with it the evolution of the means and behaviours of customers in the digital age, was at the same time both a trauma and a formidable opportunity.
The digital revolution was traumatic because the banks, which by their very nature are very conservative, especially in Switzerland with our very strong private banking culture, were not prepared for the need to profoundly transform the customer experience of the banking world: meeting customers where they are, on their channels, with mobile banking; this culture of everything, immediately, with instant payments; the opening of the information system, with the explosion of the External Asset Managers model and of external service providers with the European PSD2 standard; etc.
The digital revolution has imposed these changes, sometimes brutally, in banks and it is the source of a tremendous increase of the attack surface of banks.
But this same technology that spawned the digital revolution has proved to be the solution too.
Technology has made it possible to build digital banking applications that provide all of the bank's services on a mobile device.
Technology has made it possible to implement innovative solutions that secure the information system and protect client funds.
And in this perspective, Artificial Intelligence is really a sort of panacea: robo-advisory, chatbots, personalization of financial advice and, above all, the fight against financial crime: banking fraud and money laundering.
In the end, if five years ago our solutions seemed somewhat avant-garde, not to say futuristic and sometimes aroused a bit of skepticism, today the banks are aware of the digital urgency and it is the bankers themselves who eagerly seek our solutions.
You support the digital shift of the banking sector.
Do banks sometimes have to change their way of operating, their habits, to be able to use your technologies?
(Do you have to prepare them to work with you?)
So of course the digital revolution profoundly transforms not only the business model but also the corporate culture, its tools, and so on.
At NetGuardians we have a very concrete example.
Before the use of Artificial Intelligence, banks protected themselves with rules engines. Hundreds of rules were deployed on the information system to enforce security policies or detect the most obvious violations.
The advantage with rules was that a violation was very easy to understand. A violation of a compliance rule reported in a clear and accurate audit report was easy to understand and so was the response.
The disadvantage, however, was that the rules were a poor protection against financial crime and that's why fraud has exploded over the decade.
Today with artificial intelligence, the level of protection is excellent and without comparison with the era of the rules.
But the disadvantage of artificial intelligence is that accurately understanding a decision of the machine is much more difficult.
At NetGuardians, we develop with our algorithms a Forensic analysis application that allows bankers to understand the operation of the machine by presenting the context of the decision.
This forensic analysis application, which presents the results of our algorithms, is essential and almost as important as our algorithms themselves.
This is a powerful application, but it requires getting to grips with it.
Tom Cruise conducting a data discovery application like an orchestra in Minority Report: it's easy in Hollywood, but not in reality.
In reality, we provide initial training to our users and then regular updates.
In the end, a data analysis and forensic application is not Microsoft Word. Our success is to make such an application accessible to everyone, but not without a little help.
In conclusion, I would say that the culture transformation and the evolution of the tools do require some training and special care.
In general, what should a company prepare for, before making a digital shift?
In the digital age, many companies must transform their business model or disappear. Some services become obsolete, some new necessities appear.
We can mention Uber of course but also NetFlix, Booking, eBookers, etc.
For the majority of the industrial base, the digitalization of products and services is an absolute necessity, a question of survival.
Successful process and business model transformation often requires a transformation of the very culture of the company, down to its identity:
Among other things one could mention the following requirements:
- scaling agility from product development to the whole company level
- involving digital natives to identify and design digital services
- realizing the urgency or, if necessary, creating a sense of urgency
- understanding the scale of the challenge and the necessary transformation. Some say "if it does not hurt, it is not digital transformation"
In summary I would say that a company is "mature" for digitalization if it is inspired by the digitalization of our daily life to adapt its products and services AND if it has the ability to execute its ideas.
Ideas without the ability to execute lead to a mess; the ability to execute without ideas leads to the status quo.
From there I would say that a company must prepare itself on these two dimensions, securing the conditions and resources required to identify and design its digital products as well as those required to realize them.
by Jerome Kehrli
Posted on Wednesday Jul 04, 2018 at 09:34PM in Banking
Digitalization, with its changes in means and behaviours and the societal and industrial evolution it induces, is putting ever more pressure on banks.
Just as if regulatory pressure and financial crisis weren't enough, banking institutions have realized that they need to transform the way they run their business to attract new customers and retain their existing ones.
I already detailed this very topic in a former article on this blog: The Digitalization - Challenge and opportunities for financial institutions.
In this regard, Artificial Intelligence provides tremendous opportunities, and very interesting initiatives are starting to emerge in the big banking institutions.
In this article I intend to present these three ways along with a few examples and detail what we do at NetGuardians in this regard.
by Jerome Kehrli
Posted on Friday Jun 29, 2018 at 04:29PM in Computer Science
In parallel and in addition to BeCurious, the Empowerment Foundation launched in 2018 a project of thematic curation files through the bee² program.
Building on the practice of curating video content, bee² aims to explore the issues that shape our world, broaden the perspectives of analysis, and raise awareness so that everyone can act in a more enlightened and responsible way in the face of tomorrow's challenges.
The idea is to bring out specific issues and allow everyone to easily discover the most relevant videos on a given topic, validated by experts, without having to browse many sources of information.
The three videos I contributed to are (in French, sorry):
- AI and Cybersecurity: Preventing Bank Fraud
- How does the Google self driving car work ?
- What are the limits of AI?
The three videos can be viewed directly on this very page below.
by Jerome Kehrli
Posted on Friday May 04, 2018 at 12:32PM in Big Data
The Lambda Architecture, first proposed by Nathan Marz, attempts to provide a combination of technologies that together provide the characteristics of a web-scale system that satisfies requirements for availability, maintainability, fault-tolerance and low-latency.
Quoting Wikipedia: "Lambda architecture is a data-processing architecture designed to handle massive quantities of data by taking advantage of both batch- and stream-processing methods.
This approach to architecture attempts to balance latency, throughput, and fault-tolerance by using batch processing to provide comprehensive and accurate views of batch data, while simultaneously using real-time stream processing to provide views of online data. The two view outputs may be joined before presentation.
The rise of lambda architecture is correlated with the growth of big data, real-time analytics, and the drive to mitigate the latencies of map-reduce."
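The batch/speed/serving split described above can be illustrated with an in-memory toy (the real components at NetGuardians are Kafka, Spark/Spark-Streaming and ElasticSearch, as presented in the article; the code below only mimics the pattern, not those systems):

```python
# Toy illustration of the Lambda Architecture pattern, with in-memory
# stand-ins for the real components: a batch layer periodically recomputes
# an accurate view over the full master dataset, a speed layer maintains an
# incremental view over recent events, and the serving layer merges both.
from collections import Counter

master_dataset = ["a", "b", "a", "c"]   # immutable, append-only history
recent_events  = ["a", "d"]             # events not yet seen by the batch run

def batch_view(data):
    """Batch layer: comprehensive, accurate view recomputed from scratch."""
    return Counter(data)

def realtime_view(events):
    """Speed layer: incremental view over events since the last batch run."""
    return Counter(events)

def query(key):
    """Serving layer: merge both views at query time."""
    return batch_view(master_dataset)[key] + realtime_view(recent_events)[key]

print(query("a"))  # 3 (2 from the batch view + 1 from the speed layer)
```

When the next batch run absorbs the recent events, the speed layer's view is discarded and rebuilt, which is what gives the pattern both accuracy and low latency.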
In my current company - NetGuardians - we detect banking fraud using several techniques, among which real-time scoring of transactions to compute a risk score.
The deployment of the Lambda Architecture has been a key evolution in helping us move towards real-time scoring at large scale.
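As a heavily simplified sketch of what real-time risk scoring can look like (this is not NetGuardians' actual algorithm, just a generic behavioural-deviation example with arbitrary scaling), one can score a transaction by how far it departs from the customer's historical profile:

```python
# Hedged sketch of real-time transaction risk scoring (illustrative only):
# score a new transaction by how far its amount deviates from the
# customer's historical profile, as a z-score mapped into [0, 1].
import statistics

def risk_score(history, amount):
    """Higher score = further from the customer's usual behaviour."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    z = abs(amount - mean) / stdev
    return min(z / 5.0, 1.0)  # cap at 1.0; the scale factor is arbitrary

history = [120, 80, 150, 95, 110]            # customer's past amounts
print(round(risk_score(history, 100), 2))    # low score: within usual behaviour
print(round(risk_score(history, 5000), 2))   # maximal score: far outside profile
```

A production system combines many such behavioural features, but the principle of scoring each transaction against a profile in real time is the same.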
In this article, I intend to present how we implement the Lambda Architecture at my company using Apache Kafka, ElasticSearch and Apache Spark with its extension Spark-Streaming, and what it brings us.