Possible hack in the recharging of Zaragoza's public transport cards: you paid 10 euros and got 50 in credit.

The local Zaragoza press woke up this Friday to headlines as forceful as the one in El Periódico de Aragón: "Massive fraud".

According to the newspaper, an "uncontrolled hack" is allowing fraudulent transport credit, loaded through a mobile application with NFC, to be used on Zaragoza's urban buses and tram without being detected by the inspectors who monitor fare evasion.

Almost 400 cards are believed to have been involved in the fraudulent loading of balance to spend on public transport in Zaragoza.

Specifically, according to data gathered by Zaragoza City Council, the fraud would involve 300 citizen cards (an ID for residents registered in the Aragonese capital that can be used to pay for various municipal services) and 69 Lazo cards, a similar card available to those not registered in the city.

The dimension of what happened is “incalculable”.

The Zaragoza city council has explained that since late last week a rumor has been spreading through the city, especially in the university area, about an offer to recharge the citizen card at 20% of its cost: if you wanted to top up 50 euros, for example, you only had to pay 10.

The matter will be reported to the police after a meeting between Zaragoza City Council and Hiberus Technology, the company responsible for operating these contactless cards. It is not the first time such rumors have spread through the city, explains the Aragonese edition of eldiario.es: in the second half of last year at least 75 incidents were recorded, according to municipal sources cited by El Periódico de Aragón.

Up to 200 euros is said to have been loaded onto a single card

The newspaper claims that both the city council and the company responsible for the cards "have tried to minimize the scam" by referring only to those 300 or so cards involved in the detected "incidents", but that sources close to the case maintain that the scale of what happened is "incalculable". In addition, according to El Periódico de Aragón, citing other sources, the two people responsible for the scam have been located, with a large group of young people acting as "collaborators" in charge of "spreading the word" and finding "possible clients".

The usual arrangement, the local paper explains, was to charge 10 euros in exchange for loading 50 euros of balance that could be spent on Zaragoza's urban buses and tram. However, it claims that up to 200 euros had been loaded onto the same card over several operations. The application supposedly capable of carrying out the hack could also be acquired for 300 euros.

If the recharge was valid, it should be recorded in one of the machines that performs them legally.

The only way to control the amounts loaded is through validations on public transport. When one of these fraudulently topped-up cards is used on a bus or tram, the increase in balance since the previous validation is recorded. If the recharge were valid, it would appear in one of the machines that performs them legitimately. In addition, if the balance difference between validations is very large, the system automatically raises an alert so the recharge can be checked.
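The check described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical record structures (each validation sees the card's stored balance, and legal top-ups are logged separately); it is not the actual system used in Zaragoza.

```python
# Hypothetical sketch of the validation-side fraud check described above.
# Names and thresholds are illustrative, not Zaragoza's real system.

ALERT_THRESHOLD = 20.0  # euros; large but documented jumps still get checked


def check_validation(prev_balance, current_balance, legal_recharges):
    """Compare the balance seen at this validation with the previous one.

    legal_recharges: amounts recorded by official recharge machines
    since the previous validation.
    """
    increase = current_balance - prev_balance
    if increase <= 0:
        return "ok"  # balance only went down: normal usage
    if increase > sum(legal_recharges):
        # Balance grew more than the official machines can account for.
        return "fraud_suspected"
    if increase > ALERT_THRESHOLD:
        return "alert"  # large but documented jump: flag for manual review
    return "ok"


# A card that gained 50 euros with no recorded legal recharge:
print(check_validation(2.0, 52.0, []))      # fraud_suspected
# A card legally topped up with 10 euros:
print(check_validation(2.0, 12.0, [10.0]))  # ok
```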

Zaragoza City Council has explained that not all cards with detected incidents are necessarily related to the fraud under investigation. However, those confirmed to have been fraudulently topped up will be cancelled.

What do we mean by artificial intelligence today and what will it encompass in 2025?

In 1950, an artificial intelligence was a machine capable of simulating human thought. It has taken us some time to realize that this task must be “cut up” into smaller ones. That is why today we are more modest, and we are satisfied with an AI successfully simulating a small percentage of ourselves. But what will become of artificial intelligence in 2025?

There is no doubt that, as it progresses – by leaps and bounds, as it seems – we will also change what we understand by it. Artificial intelligence currently helps us to optimize processes. Given its versatility, in the future we could frame it in many other areas.

AI will be in charge of services that we don't even think about.

In very general terms, technology related to the "thinking" of machines is about delegating tasks: first calculation, then prediction. It is possible that by 2025 we will hand over everything else and allow machines to make decisions for us if we discover that it benefits us or improves our quality of life.

It is very likely that at some point in the next decade we will delegate much of our routine conversations to machines. Today technology already allows a machine to call a restaurant and reserve a table for us. We will probably grant machines a certain degree of social competence, "loosening the leash" with which we now tie them.

As we saw at the time, consultancies such as PwC and Gartner consider it very likely that smartphones with artificial intelligence will become a digital "self" of the user, a sort of virtual butler in the style of Iron Man's JARVIS. As machines become more intelligent, we can delegate more tasks to them. Also, more responsibility.

Making AI available to all

Statistically, we're unlikely to know how to program a neural network. Most of us don't have this knowledge, just as we don't light a fire when we want to warm up: instead, we turn on the heating. It seems the near future of artificial intelligence will inevitably involve its democratization in every direction, perhaps starting in the workplace.

What AI will bring us in 2019

High-tech companies have a bottleneck in the development department. They are the ones who invent the future every day, but there is a lot of labor friction when trying to transmit day by day part of this knowledge to other verticals of the company. Meanwhile, artificial intelligence is gaining skills.

We will see the real leap in AI when it can be programmed simply by an average user, in the same way that user can do a bit of research and start creating tables, pivot tables, or macros of increasing difficulty in Excel. The day these tools become accessible to the average population, "the revolution" will begin. And it will probably be soon.

The revolution has not yet begun

A year ago, Michael Irwin Jordan, one of the world’s undisputed leaders in machine learning, wrote an article entitled ‘Artificial Intelligence – The Revolution Has Not Yet Happened’. A few years earlier, Tim Urban of Wait But Why wrote about the “artificial intelligence revolution” and the road to super intelligence. The graph describes the phenomenon quite well.

It depicts the "edge" of human progress: from where we stand, progress looks linear, but it will shoot upward once the potential of AI and of emergent systems, whose unpredictability we discuss below, is unleashed. To give a sense of scale, here is a riddle used in classes to teach how fast these emergent systems manifest themselves.

“You are inside a barrel with a drop at the bottom. The amount of water doubles every minute and in one hour the barrel will be full. In what minute will you drown?”
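The riddle resolves with simple arithmetic: if the water doubles every minute and the barrel is full at minute 60, it was only half full at minute 59. A quick numerical check:

```python
# Water doubles every minute; the barrel is full (fraction 1.0) at minute 60.
def fraction_full(minute, full_at=60):
    """Fraction of the barrel filled at a given minute."""
    return 2.0 ** (minute - full_at)

for m in (55, 58, 59, 60):
    print(m, fraction_full(m))
# At minute 59 the barrel is still only half full; one minute later it overflows.
```

This is the point of the exercise: for 59 of the 60 minutes the barrel looks almost empty, and the decisive change happens in the very last step.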

Our brains are not built to think about disruptive change, but there is no doubt that it will come at some point. Almost certainly by 2025.

Emerging systems cannot be predicted

Who could have predicted the explosion of knowledge after the printing press, which automated the work of scribes? Something similar happened with smartphones. One, in isolation, is a very useful pocket computer, but the emergent system that changed the world a few years ago came from millions of them interacting with each other. Its consequences could not have been accurately predicted.

Now that devices such as the Huawei P30 Pro already incorporate chips dedicated to artificial intelligence, a new emergent system is brewing whose outcome we cannot imagine. We can sense something coming, but writing it down on paper will not make it any easier to predict.

Let us apply this reflection, for example, to autonomous mobility. It is estimated that there are more than 330,000 professional drivers in Spain. Within a decade, as far as we know, a third of a million people (0.7% of the population) may have to look for a new job. Imagine the impact this would have on other sectors.

Now consider that there are already machines that make hamburgers, analyze the stock market, translate, organize trips, generate summaries and even paint pictures. The two best chess players in the world are not people; they are artificial intelligences. Little by little, machines learn to do things, and once they do, they never forget.

We cannot predict the future, and there is the paradox that the more data we have to try to analyze it, the more divergent the approximations seem. But we do know that there will be changes. Soon a new emerging system will emerge, perhaps educational or economic, and the key will be to understand what is happening and how to adapt to change.

A unitary AI made up of many specific ones

We opened this article by saying that in 1950 machines were going to be as intelligent as people. Alan Turing fantasized about making them indistinguishable from us. Along the way we learned that it was easier to create artificial intelligence for each specific problem. The question we can ask ourselves is: can we unite them into a larger AI?

Perhaps by 2025 we will use a collection of artificial intelligences coordinated with each other through natural language processing. Interconnected, from our limited human perspective they will seem like a single assistant while, behind the scenes, hundreds or thousands of "small" AIs solve each request.
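That architecture, often described as a router or dispatcher over specialized models, can be sketched very roughly. Everything below (intents, handlers, replies) is hypothetical, purely to illustrate the idea of many "small" AIs behind a single interface:

```python
# Minimal sketch of a dispatcher routing natural-language requests to
# specialized handlers ("small" AIs). All names here are illustrative.

def weather_ai(query):
    return "It looks sunny."   # stand-in for a weather-specialized model

def calendar_ai(query):
    return "Meeting booked."   # stand-in for a scheduling model

def fallback_ai(query):
    return "Sorry, I can't help with that yet."

# In a real system this routing step would itself be an NLP model;
# here a simple keyword table stands in for it.
ROUTES = {
    "weather": weather_ai,
    "rain": weather_ai,
    "meeting": calendar_ai,
    "schedule": calendar_ai,
}

def assistant(query):
    """Present one interface; dispatch to whichever specialist matches."""
    for keyword, handler in ROUTES.items():
        if keyword in query.lower():
            return handler(query)
    return fallback_ai(query)

print(assistant("Will it rain tomorrow?"))    # routed to the weather AI
print(assistant("Schedule a meeting at 10"))  # routed to the calendar AI
```

The user only ever sees `assistant`; which specialist answered is invisible, which is exactly the "single assistant, many AIs" effect the paragraph describes.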

According to all experts, we are at an early stage in the development of artificial intelligence. In a decade, the breakthroughs we are astonished at today will go completely unnoticed. We are living in an exciting time, and soon we will have to redefine what artificial intelligence is to us.

Pavel Durov, the co-founder of Telegram, explains why he believes “WhatsApp will never be safe”.

Telegram's co-founder and chief executive, Pavel Durov, is very clear in one of his latest posts: "WhatsApp will never be safe." This forceful statement is part of a text in which the developer seeks to explain why, in his view, the messaging application in the hands of Facebook will remain open to surveillance.

"All their security problems are conveniently suited for surveillance and look a lot like backdoors."

Durov takes as his starting point WhatsApp's latest major security problem, noting that "the world seems surprised," to argue why he believes the application bought by Mark Zuckerberg a few years ago has numerous security problems that "look a lot like backdoors." The insinuations are clear.

“No wonder dictators like WhatsApp.”

Pavel Durov begins sowing doubts about WhatsApp's security from the start of his article. Before speculating about backdoors and the resemblance some of the Facebook-owned application's security problems bear to them, the founder of the Russian alternative makes one thing clear: "Every time WhatsApp has to fix a critical vulnerability in their app, a new one seems to appear in its place."

He reinforces the doubt about hypothetical backdoors with a fact: "Unlike Telegram, WhatsApp is not open source, so there's no way for a security researcher to easily check whether there are backdoors in its code." And he goes further, claiming that WhatsApp not only doesn't publish its code but does exactly the opposite: it deliberately obfuscates its binaries "to make sure no one is able to study them thoroughly."

"WhatsApp is not open source, so a security researcher can't easily check for backdoors."

Durov also suggests that these backdoors may be demanded of WhatsApp by the FBI. And, he says, "it's not easy to run a secure communication app from the United States." As he explains, a Telegram team spent a week in the US in 2016 and, during that time, recorded three infiltration attempts by the FBI. "Imagine what 10 years in that environment can bring to a US-based company," he says.

The co-founder of one of WhatsApp's great rivals, a regular beneficiary of its outages, recalls the anti-terrorist justification for backdoors in communication platforms and the obvious problem: those backdoors "can also be used by criminals and authoritarian governments." And he goes on to say: "No wonder dictators seem to love WhatsApp."

"Their [WhatsApp's] lack of security allows them to spy on their own people, so WhatsApp is still freely available in places like Russia or Iran, where authorities prohibit Telegram."

"In fact, I started working on Telegram in direct response to personal pressure from the Russian authorities. Back then, in 2012, WhatsApp was still transferring messages in plain text in transit. That was insane. Not only governments and hackers, but also mobile providers and Wi-Fi administrators had access to all WhatsApp texts."

Pavel Durov goes on to describe the encryption measures successively implemented by the famous messaging application: first "some encryption", then end-to-end encryption "so that no one else can access the messages." The latter coincided with "an aggressive campaign" urging users to back up their conversations. "WhatsApp did not inform its users that, when backed up, messages are no longer protected by end-to-end encryption and can be accessed by hackers and law enforcement," he says.

According to his text, those who declined to make backups when prompted by WhatsApp's pop-ups "can be tracked by a number of other tricks, from accessing their contacts' backups to invisible changes to encryption keys." He adds: "The metadata generated by WhatsApp users is leaked to all kinds of agencies in large volumes by its parent company."

“Looking back, there hasn’t been a day in WhatsApp’s 10-year journey that this service has been secure.”

"WhatsApp has a consistent history, from zero encryption at its inception to a succession of security issues strangely suitable for surveillance purposes. Looking back, there hasn't been a single day in WhatsApp's 10-year journey when this service was secure. That's why I don't think that merely updating WhatsApp's mobile app makes it secure for anyone. For WhatsApp to become a privacy-oriented service, it has to risk losing entire markets and clashing with authorities in their home country. They don't seem to be ready for that."

The head of Telegram also recalls that last year the founders of WhatsApp left the company "due to concerns over users' privacy", and says that he himself had to leave his own country, Russia, after refusing "to comply with government-sanctioned violations of VK users' privacy." It was not pleasant, he explains, but he would gladly do it again.

“Telegram has had no data leaks or security problems.”

Pavel Durov states that Telegram has done a “bad job” in persuading WhatsApp users since, although they have attracted hundreds of millions of users in recent years, “many people cannot stop using WhatsApp because their friends and family are still using it” and “most Internet users are still hostages of the Facebook/WhatsApp/Instagram empire”.

At this point, Durov boasts of the security that, he says, Telegram has:

"In almost 6 years of its existence, Telegram hasn't had any major data leak or security flaw of the kind WhatsApp demonstrates every few months. In the same 6 years, we disclosed exactly zero bytes of data to third parties, while Facebook/WhatsApp has been sharing pretty much everything with everybody who claimed they worked for a government."

“It’s either us or Facebook’s monopoly.”

Finally, the head of Telegram says that lately WhatsApp has been copying features from his application, and even that Mark Zuckerberg wants to appropriate his platform's philosophy of privacy and speed. But complaining about it will not help, he says; they must admit Facebook is pursuing an intelligent strategy. He recalls the case of Snapchat and how part of its functionality was emulated.

"At Telegram we have to acknowledge our responsibility in shaping the future. It's either us or the Facebook monopoly. It's either freedom and privacy or greed and hypocrisy. Our team has been competing with Facebook for the last 13 years. We already beat them once, in the Eastern European social networking market. We will beat them again in the global messaging market. We have to."

Durov acknowledges that it won't be easy, but he appeals to his users. "If you like Telegram enough, you will tell your friends. And if every Telegram user convinces 3 of their friends to delete WhatsApp and move to Telegram permanently, Telegram will already be more popular than WhatsApp," he explains. "The era of greed and hypocrisy will end. An era of freedom and privacy will begin. It is much closer than it seems."

A teenager takes her own life after posting a poll on Instagram in which the majority voted for her to do so.

A 16-year-old Malaysian teenager took her own life after asking her Instagram followers whether she should die or not. In a poll published on the social network, 69% of voters answered yes, triggering the young woman's decision.

According to The Guardian, the Malaysian police, who wanted to maintain the anonymity of the young woman, stated that the survey contained the following message: “Really important, help me to choose D [death] / L [Life]”.

Instagram and its impact on adolescent behavior

Lawyers involved in the case began to suggest that those who voted yes might be guilty of inciting suicide, raising the question of whether she would still be alive had they not encouraged her to die.

On the other hand, Malaysia’s own Minister of Youth and Sports stated that he was concerned about the mental health of young people, raising the need for attention to be paid to the issue at the national level.

"I am really concerned about the mental health status of our young people. It is a national issue that must be taken seriously." Syed Saddiq, Minister of Youth and Sports, Malaysia.

In February, Instagram acknowledged that posts about self-harm should be regulated following the case of Molly Russell, a 14-year-old English teenager who took her own life in 2017. Her parents argued that she frequently consumed self-harm content and that this contributed to their daughter's death. After the case, the application began to block this type of image on the social network.

After this news, the question of how social networks affect behavior, especially among teenagers, is back on the table. Instagram has already announced that it is studying removing likes ("Me gusta") from public view, seeking to "alleviate the pressure" currently felt on some Instagram accounts, pressure that so often ends up influencing the content we upload to (or even delete from) the social network.

San Francisco is the first U.S. city to ban facial recognition surveillance

We have already seen how some countries are using facial recognition to monitor entire cities or strategic points. China is one of the territories at the forefront of this technology, and it was recently accused of using it to track a minority.

In contrast, a few hours ago San Francisco became the first city in the United States to prohibit the use of facial recognition by its government agencies and departments.

“We support the police, but not a police state.”

That means the city’s mayor won’t allow local agencies (including the Police) to use facial recognition techniques to identify criminals in public places.

With eight votes in favor and one against, they have managed to give the green light to this measure, which ensures that (unlike what we saw happening in China) the protection of minorities and the right to privacy should take precedence.

It is the first U.S. city to approve such a measure, but it is expected to be implemented in other regions of the country as well (at the moment, the city of Oakland and the state of Massachusetts are the names being mentioned most loudly).

This legislation was drafted by supervisor Aaron Peskin, who says that "they support a good police force, but none of us want to live in a police state." Peskin believes this technology is "invasive" and that San Francisco has a "responsibility" to set an example:

"I think San Francisco has a responsibility to talk about the things that are affecting the whole world and are happening in our front yard."

We’ll have to see if Peskin’s wish becomes a reality and this law ends up being implemented in other parts of the country or the planet. It’s a measure, in a way, against the tide, in a world where it’s increasingly common to use this technology to keep an eye on society.

The latest strange Windows 10 bug: the most recent update installs itself twice and no one knows why.

Version 1809 of Windows 10, also known as the problematic October 2018 update, has received one more cumulative update, and with it another bug for the list, this time a rather strange one: the update installs itself twice.

Initially reported in several Reddit threads, the problem has now been posted by Microsoft on its known issues page, which indicates that the company is currently investigating the bug and will provide more information when it becomes available.

Basically, it is not yet known why this happens, and it seems to be a common problem, according to multiple users in the Microsoft and Reddit forums. They all report the same thing: after checking for updates in Windows Update and installing the update, once the corresponding reboot completes, the update begins installing all over again.

It is an important cumulative update that we recommend installing, and at least it does not cause crashes like the previous one.

While the strange bug is being investigated, the good news is that it does not seem to cause any additional problems in the system, beyond the space taken up by the duplicate update. And we have to look on the bright side, considering that the previous cumulative update slowed the system down and even caused crashes.

As part of the usual Patch Tuesday, Windows 10 received the KB4494441 update this week, one that in addition to security updates includes improvements and bug fixes, as well as enabling "Retpoline" by default, an important Spectre mitigation that improves performance, so it is highly recommended that you install it.