
Ransomware & HSE


Comments

  • Registered Users Posts: 12,262 ✭✭✭✭Flinty997


    Because there is no one system that meets the end-to-end need of the health service. And if there was one, the cost would be so astronomical that we'd never be able to afford it.

    Because different medical devices come with their own proprietary systems.

    Because if you try to control every single software purchase and ensure that you're buying the software that meets the needs of ALL users, you freeze the system into not buying anything for a couple of years.

    Exactly. Most people here are home users with no experience of large IT systems.


  • Registered Users Posts: 29,114 ✭✭✭✭AndrewJRenko


    How long do blood tests from a GP take due to the cyber hack? Got my bloods taken recently and curious about the new timeframe.
    I went through health screening with an occ health provider recently. I got the full report back including blood test results in two days.


  • Registered Users Posts: 536 ✭✭✭mrjoneill


    back in the day HP and the likes thought the money was in hardware! then boom.... Bill Gates fooled them all!
    Back in the "day", processors and memory were expensive, and software was there to share that very expensive hardware.


  • Moderators, Politics Moderators Posts: 39,871 Mod ✭✭✭✭Seth Brundle


    How long do blood tests from a GP take due to the cyber hack? Got my bloods taken recently and curious about the new timeframe.
    I went through health screening with an occ health provider recently. I got the full report back including blood test results in two days.
    My GP is unable to do blood tests because the system for receiving the results is still down.
    My advice would be to call your GP and see.


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    Is there any chance that these hackers can be caught at all? Or would they have covered their tracks too well?

    They're almost certainly already caught/compromised, depending on which competent organisations they targeted. Case in point: the recent pipeline ransomware attack. The group responsible were caught and cut off at the head in less than a week. All funds seized, all C&C servers gone.

    https://krebsonsecurity.com/2021/05/darkside-ransomware-gang-quits-after-servers-bitcoin-stash-seized/

    The problem here wasn't the ransomware attack per se - it was the complete lack of mitigations in place against the attack. That wouldn't even be so irretrievable except for the fact that they had no appropriate formal policies in place for this extremely predictable incident.

    In short, even though they have the decryption keys, a lot of systems were rendered irretrievable as they turned them off mid-encryption. The single worst thing you can do. Video Games literally tell you not to do this when saving your progress. It's Computer Science 101, and is unacceptable from a state organisation in this day and age.

    AND, just to clarify before people start jumping in with strong opinions about predictability and 0day exploits - The issue here is not one of predicting and mitigating a novel exploit, but rather having appropriate high-availability failover and data redundancy plans suitable for a critical national body.

    Almost everywhere in the SaaS industry there's a requirement for things like 'five nines' uptime and automatic failover of critical core systems. That the health service doesn't have this is an absolutely scathing indictment of its treatment of healthcare IT as a cost centre over the last 20 years, rather than as the core provisioning service that it represents.

    In any case, this sort of non-targeted ransomware attack being anything other than a 'wipe and restore from last night's backup' is *always* the fault of:

    * Poor control and granularity of user permissions
    * Escalation of privilege above the use-case of the system
    * Lack of monitoring of file handles (rough sketch after this list)
    * Lack of a network 'DMZ' to prevent the spread
    * Lack of battle-tested fail-over and backup/restore procedures
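
    To give an idea of what "monitoring of file handles" means in practice, here's a rough Python sketch - the share path and threshold are made up, and a real deployment would use proper EDR tooling rather than a polling script - that just watches a share for an abnormal burst of file modifications, the classic signature of an encryption run:

    import os
    import time

    WATCH_DIR = "/srv/shares/clinical"   # hypothetical share path
    WINDOW_SECONDS = 60
    ALERT_THRESHOLD = 500                # files changed per window before we raise the alarm

    def snapshot(root):
        """Map every file under root to its last-modified time."""
        mtimes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mtimes[path] = os.stat(path).st_mtime
                except OSError:
                    pass  # file vanished mid-walk
        return mtimes

    previous = snapshot(WATCH_DIR)
    while True:
        time.sleep(WINDOW_SECONDS)
        current = snapshot(WATCH_DIR)
        changed = sum(1 for path, mtime in current.items() if previous.get(path) != mtime)
        if changed > ALERT_THRESHOLD:
            print(f"ALERT: {changed} files modified in {WINDOW_SECONDS}s - possible mass encryption")
            # this is where you'd page someone and flip the share to read-only
        previous = current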

    In short, they done goofed.


  • Registered Users Posts: 7,256 ✭✭✭plodder


    https://www.rte.ie/news/politics/2021/0622/1230770-hse-oireachtas-committee/
    At least three quarters of the Health Service Executive's IT servers have been decrypted and 70% of computer devices are back in use, following last month's cyber attack.
    Does "server decrypted" mean back working? Possibly not. But 70% of devices back in use is good news.


  • Registered Users Posts: 18,602 ✭✭✭✭kippy


    pioneerpro wrote: »
    They're almost certainly already caught/compromised, depending on which competent organisations they targeted. Case in point: the recent pipeline ransomware attack. The group responsible were caught and cut off at the head in less than a week. All funds seized, all C&C servers gone.

    https://krebsonsecurity.com/2021/05/darkside-ransomware-gang-quits-after-servers-bitcoin-stash-seized/

    The problem here wasn't the ransomware attack per se - it was the complete lack of mitigations in place against the attack. That wouldn't even be so irretrievable except for the fact that they had no appropriate formal policies in place for this extremely predictable incident.

    In short, even though they have the decryption keys, a lot of systems were rendered irretrievable as they turned them off mid-encryption. The single worst thing you can do. Video Games literally tell you not to do this when saving your progress. It's Computer Science 101, and is unacceptable from a state organisation in this day and age.

    AND, just to clarify before people start jumping in with strong opinions about predictability and 0day exploits - The issue here is not one of predicting and mitigating a novel exploit, but rather having appropriate high-availability failover and data redundancy plans suitable for a critical national body.

    Almost everywhere in the SaaS industry there's a requirement for things like 'five nines' uptime and automatic failover of critical core systems. That the health service doesn't have this is an absolutely scathing indictment of its treatment of healthcare IT as a cost centre over the last 20 years, rather than as the core provisioning service that it represents.

    In any case, this sort of non-targeted ransomware attack being anything other than a 'wipe and restore from last night's backup' is *always* the fault of:

    * Poor control and granularity of user permissions
    * Escalation of privilege above the use-case of the system
    * Lack of monitoring of file handles
    * Lack of a network 'DMZ' to prevent the spread
    * Lack of battle-tested fail-over and backup/restore procedures

    In short, they done goofed.


    Look - you have good points - yeah, it's highly likely more could have been done to negate the impact of this attack (as in the case of any IT breach), and blame does lie with the organisation and the environment that allowed this to happen, but let's not suggest that it's "always" the fault of the organisation or what the organisation did or did not do.
    It's an almost never-ending resource drain to ensure you have systems that are bulletproof, and even then...


    It's the fault of the perpetrators and those who condone their actions, ultimately.

    In relation to powering off devices that were mid-encryption - very, very little could have been done on this front when you realise the number of devices involved.


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    kippy wrote: »
    Look - you have good points - yeah, it's highly likely more could have been done to negate the impact of this attack (as in the case of any IT breach), and blame does lie with the organisation and the environment that allowed this to happen, but let's not suggest that it's "always" the fault of the organisation or what the organisation did or did not do.
    It's an almost never-ending resource drain to ensure you have systems that are bulletproof, and even then...

    No, I'm sorry, it's simply not the case.

    How it originally became resident in the system isn't really relevant - see previous comments about 0-days. It's about the subsequent inability to mitigate its impact on a production system due to a lack of quarantined failover/backup and established, practiced processes.

    Take this simple flow model
    The categorisation was done in respect to the stages of ransomware deployment methods with a predictive model we developed called Randep. The stages are fingerprint, propagate, communicate, map, encrypt, lock, delete and threaten.
    https://crimesciencejournal.biomedce...163-019-0097-9

    The map/encrypt/lock part of the flow is the bit where it should have been caught. File-handle monitoring and examination of memory-resident applications and their I/O processes should have triggered an alarm - at which point simply switching the relevant network-shares/file-systems to read-only would have completely stopped propagation of this worm in its tracks.

    If you don't have the privileges and the ability to encrypt files on the file system, then the attack simply can't happen.
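
    For the avoidance of doubt, the "switch to read-only" step isn't exotic either. A rough sketch of what it could look like on a Linux file server - the export path and client range are hypothetical, and a real setup would wrap this in proper incident-response tooling:

    import subprocess

    SHARE = "/srv/shares/clinical"   # hypothetical NFS export, assumed to be its own mount point
    CLIENTS = "10.0.0.0/8"           # hypothetical client address range

    def freeze_share():
        """Re-export the share read-only and remount the backing filesystem read-only."""
        # Override the existing rw export with a read-only one for all clients
        subprocess.run(["exportfs", "-o", "ro,sync", f"{CLIENTS}:{SHARE}"], check=True)
        # Belt and braces: remount the underlying filesystem read-only as well
        subprocess.run(["mount", "-o", "remount,ro", SHARE], check=True)

    if __name__ == "__main__":
        freeze_share()
        print(f"{SHARE} is now read-only - nothing can encrypt files in place on it")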

    Fair enough, if it's not caught it's not caught. But what's the failover plan in this case? And was it sufficient? It seems not.


    Indeed, patient zero in these cases generally turns out to be a generic phishing email opened in a WFH scenario by someone with inappropriate system privileges, rather than a targeted spear-phishing or social engineering attack. I won't blame them - people are people, and not everyone is tech-savvy to the point where they're immune to these. I blame the processes that are supposed to protect them from themselves.

    Cybersecurity is basically about mitigating the unknown through best-practice recovery procedures - having the ability to quarantine and restore from a known-good backup with little or no downtime, rather than fingerprinting and eliminating a series of patched binaries in a live system.

    That said, I wish the contractors fixing it all the best. It's a brutal situation to be in.


  • Registered Users Posts: 18,602 ✭✭✭✭kippy


    pioneerpro wrote: »
    No, I'm sorry, it's simply not the case.

    How it originally became resident in the system isn't really relevant - see previous comments about 0-days. It's about the subsequent inability to mitigate its impact on a production system due to a lack of quarantined failover/backup and established, practiced processes.

    Take this simple flow model


    https://crimesciencejournal.biomedce...163-019-0097-9

    The map/encrypt/lock part of the flow is the bit where it should have been caught. File-handle monitoring and examination of memory-resident applications and their I/O processes should have triggered an alarm - at which point simply switching the relevant network-shares/file-systems to read-only would have completely stopped propagation of this worm in its tracks.

    If you don't have the privileges and the ability to encrypt files on the file system, then the attack simply can't happen.

    Fair enough, if it's not caught it's not caught. But what's the failover plan in this case? And was it sufficient? It seems not.


    Indeed, patient zero in these cases generally turns out to be a generic phishing email opened in a WFH scenario by someone with inappropriate system privileges, rather than a targeted spear-phishing or social engineering attack. I won't blame them - people are people, and not everyone is tech-savvy to the point where they're immune to these. I blame the processes that are supposed to protect them from themselves.

    Cybersecurity is basically about mitigating the unknown through best-practice recovery procedures - having the ability to quarantine and restore from a known-good backup with little or no downtime, rather than fingerprinting and eliminating a series of patched binaries in a live system.

    That said, I wish the contractors fixing it all the best. It's a brutal situation to be in.

    I am sorry, but it simply is the case.
    All of these mitigations and steps require resources to implement, test and manage - and these resources increase drastically when you are dealing with a large, complex organisation with a wide array of systems, environments and legacy issues.



    Again - you take all blame off those who actually carried out the attack - but sure lookit, that's sound.

    And again, I have no doubt that some processes failed (obviously) for this to happen in the first instance


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    kippy wrote: »
    I am sorry, but it simply is the case.
    All of these mitigations and steps require resources to implement, test and manage - and these resources increase drastically when you are dealing with a large, complex organisation with a wide array of systems, environments and legacy issues.

    Again - you take all blame off those who actually carried out the attack - but sure lookit, that's sound.

    And again, I have no doubt that some processes failed (obviously) for this to happen in the first instance

    I'm afraid you're not convincing me you're speaking from a position of experience or insight in relation to this.

    The development systems I deal with are significantly larger and more complex than the HSE's. The production systems in the field absolutely dwarf them - and the integrated historical legacy systems go back as far as COBOL. There's no argument to be made here that doesn't come back to a lack of processes and failover procedures, whatever caveats you may make about the expense and complexity of such an endeavour.

    If we don't do our job, mortality rates don't spike, and yet we still have better processes in place than the HSE, with far trickier, low-level optimised systems.

    You know when our (multi-million dollar) penalties kick in? “Five nines uptime” is the contract. That means that a system is fully operational 99.999% of the time - so we have an average of less than 6 minutes of downtime per year, and that's inclusive of upgrades and critical security patches (e.g. Shellshock, Heartbleed, etc.). Anything over that and it's basically game over.
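
    For anyone wondering where the "less than 6 minutes" comes from, the arithmetic is trivial:

    # Allowed downtime at 99.999% ("five nines") availability
    minutes_per_year = 365.25 * 24 * 60                  # ~525,960 minutes
    allowed_downtime = minutes_per_year * (1 - 0.99999)
    print(round(allowed_downtime, 2))                    # ~5.26 minutes per year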


  • Registered Users Posts: 18,602 ✭✭✭✭kippy


    pioneerpro wrote: »
    I'm afraid you're not convincing me you're speaking from a position of experience or insight in relation to this.

    The development systems I deal with are significantly larger and more complex than the HSE's. The production systems in the field absolutely dwarf them - and the integrated historical legacy systems go back as far as COBOL. There's no argument to be made here that doesn't come back to a lack of processes and failover procedures, whatever caveats you may make about the expense and complexity of such an endeavour.

    If we don't do our job, mortality rates don't spike, and yet we still have better processes in place than the HSE, with far trickier, low-level optimised systems.

    You know when our (multi-million dollar) penalties kick in? “Five nines uptime” is the contract. That means that a system is fully operational 99.999% of the time - so we have an average of less than 6 minutes of downtime per year, and that's inclusive of upgrades and critical security patches (e.g. Shellshock, Heartbleed, etc.). Anything over that and it's basically game over.

    You obviously know what you're talking about.
    HSE completely to blame. No blame lies anywhere else.


  • Banned (with Prison Access) Posts: 989 ✭✭✭ineedeuro


    kippy wrote: »
    You obviously know what you're talking about.
    HSE completely to blame. No blame lies anywhere else.

    Nobody said that. You seem to think the HSE deserve no criticism, and along with another few on here you seem hell-bent on making excuses.

    Sorry, but the HSE deserve criticism and should be made to answer questions.


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    kippy wrote: »
    You obviously know what you're talking about.

    I'll leave that to the domain experts on the thread to attest/contest.
    HSE completely to blame. No blame lies anywhere else.

    If there was an arson attack and they found out the fire-doors were blocked and the fire alarms weren't fit for purpose, they wouldn't be absolved of blame for the subsequent loss of life. Serious questions about their processes and procedures would be asked as a matter of course. Talking about the size of the building or the age of the equipment as some sort of mitigating excuse wouldn't even come into consideration.

    It's my position that this far less-targeted and much more predictable attack should be evaluated in the same context. I don't think that's an extreme or incomprehensible position to be taking.


  • Registered Users Posts: 5,993 ✭✭✭Cordell


    HSE are definitely at fault for not having proper security, but this was an attack, not an accident, let's not forget that.


  • Banned (with Prison Access) Posts: 989 ✭✭✭ineedeuro


    Cordell wrote: »
    HSE are definitely at fault for not having proper security, but this was an attack, not an accident, let's not forget that.

    Again, nobody said it wasn't an attack.

    The concern I would have is that the HSE is paying substantial taxpayer money to a range of people in the HSE, and then to external companies, and from what we can gather the door was left wide open with nobody standing guard.

    As I said, if this was a first of its kind then you could argue they wouldn't be aware, but the NHS had something similar happen in 2017, so they had to know this was a danger.


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    Cordell wrote: »
    HSE are definitely at fault for not having proper security, but this was an attack, not an accident, let's not forget that.

    With respect, I didn't say arson accident above. It's really irrelevant in the context of the overall discussion. Comes down to processes and procedures for predictable events - and this was demonstrably and explicitly predictable.

    I wouldn't mind, but the hackers themselves bent over backwards almost immediately after the fact to prevent loss of life - they kept the demands strictly tied to personal information.

    We'd be back in business fully if the very basic first rule of ransomware attacks were adhered to - DON'T TURN OFF THE AFFECTED SYSTEMS MID-ENCRYPTION UNLESS YOU HAVE A TESTED FAILOVER.

    Now, even with the key, we're in serious serious trouble.

    https://www.bbc.com/news/world-europe-57197688
    Hackers responsible for causing widespread disruption to the Irish health system have unexpectedly gifted it with the tool to help it recover.

    The Conti ransomware group was reportedly asking the health service for $20m (£14m) to restore services after the "catastrophic hack".

    But now the criminals have handed over the software tool for free.

    The Irish government says it is testing the tool and insists it did not, and would not, be paying the hackers.

    Taoiseach (Irish prime minister) Micheál Martin said on Friday evening that getting the software tool was good, but that enormous work is still required to rebuild the system overall.

    Conti is still threatening to publish or sell data it has stolen unless a ransom is paid.

    On its darknet website, it told the Health Service Executive (HSE), which runs Ireland's healthcare system, that "we are providing the decryption tool for your network for free".

    "But you should understand that we will sell or publish a lot of private data if you will not connect us and try to resolve the situation."

    It was unclear why the hackers gave the tool - known as a decryption key - for free, said Health Minister Stephen Donnelly.

    "No ransom has been paid by this government directly, indirectly, through any third party or any other way. Nor will any such ransom be paid," he told Irish broadcaster RTÉ.


  • Banned (with Prison Access) Posts: 989 ✭✭✭ineedeuro


    pioneerpro wrote: »
    With respect, I didn't say arson accident above. It's really irrelevant in the context of the overall discussion. Comes down to processes and procedures for predictable events - and this was demonstrably and explicitly predictable.

    I wouldn't mind, but the hackers themselves bent over backwards almost immediately after the fact to prevent loss of life - they kept the demands strictly tied to personal information.

    We'd be back in business fully if the very basic first rule of ransomware attacks were adhered to - DON'T TURN OFF THE AFFECTED SYSTEMS MID-ENCRYPTION UNLESS YOU HAVE A TESTED FAILOVER.

    Now, even with the key, we're in serious serious trouble.

    https://www.bbc.com/news/world-europe-57197688

    I am no expert, but the other issue I heard is that they shut everything down blindly. Once everything was turned off they had no idea which systems had been infected and which hadn't. This caused huge issues, as they didn't know what to start turning back on and how to turn them on.

    This would suggest they hadn't a plan in place in case this happened - "if ransomware happens, we do 1, 2, 3, 4", etc.
    Something that would be fairly basic for any organisation.


  • Registered Users Posts: 5,993 ✭✭✭Cordell


    pioneerpro wrote: »
    the hackers themselves bent over backwards almost immediately after the fact to prevent loss of life

    What a great bunch of lads.
    Loss of life was absolutely going to happen and they were OK with that when they planned the attack. They had a choice when they had the backdoor open: encrypt, steal, or both. They did both.


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    ineedeuro wrote: »
    I am no expert, but the other issue I heard is that they shut everything down blindly. Once everything was turned off they had no idea which systems had been infected and which hadn't. This caused huge issues, as they didn't know what to start turning back on and how to turn them on.

    This would suggest they hadn't a plan in place in case this happened - "if ransomware happens, we do 1, 2, 3, 4", etc.
    Something that would be fairly basic for any organisation.

    This is the core of the problem exactly. They rendered significant chunks of their data irretrievable due to panicking and lack of formally defined and tested procedures - turning an accident into a tragedy.

    We need to ensure this *never* happens again, and that starts with accountability and transparency.


  • Moderators, Politics Moderators Posts: 39,871 Mod ✭✭✭✭Seth Brundle


    ineedeuro wrote: »
    I am no expert, but the other issue I heard is that they shut everything down blindly. Once everything was turned off they had no idea which systems had been infected and which hadn't. This caused huge issues, as they didn't know what to start turning back on and how to turn them on.
    Should they have left everything on and allowed any unaffected systems to be infected?


  • Registered Users Posts: 18,602 ✭✭✭✭kippy


    pioneerpro wrote: »
    I'll leave that to the domain experts on the thread to attest/contest.



    If there was an arson attack and they found out the fire-doors were blocked and the fire alarms weren't fit for purpose, they wouldn't be absolved of blame for the subsequent loss of life. Serious questions about their processes and procedures would be asked as a matter of course. Talking about the size of the building or the age of the equipment as some sort of mitigating excuse wouldn't even come into consideration.

    It's my position that this far less-targeted and much more predictable attack should be evaluated in the same context. I don't think that's an extreme or incomprehensible position to be taking.

    I don't disagree - and I've never stated that the HSE are completely beyond reproach when it comes to this incident. It is obvious there were failings at a number of points in the timeline, but at the same time, one cannot assume that the way to stop this happening in future with such dire consequences is to throw money and resources at the HSE/infrastructure itself, whether preventative or reactionary - that is a never-ending resource pit which guarantees very little.
    The root cause of the issue needs to be eradicated.


  • Banned (with Prison Access) Posts: 989 ✭✭✭ineedeuro


    Should they have left everything on and allowed any unaffected systems to be infected?

    Or pulled the plug on the internet? Shut down all external connectivity.
    Shut down the core switch to stop network traffic.
    Then identify the infected systems and isolate them.

    Just an idea, of course.
    Of course, as I suggested, they should have had a plan for exactly what they would do. Not sure how you or I would come up with that plan on a forum.


  • Moderators, Politics Moderators Posts: 39,871 Mod ✭✭✭✭Seth Brundle


    You criticise them for shutting everything down, yet when I ask, your solution is this ^^^
    Leave all internal systems up and running, allowing the infection to spread even more?
    Really?

    Now, I'm not saying that how it was handled within the HSE from start to finish has been anywhere near textbook, but you're just throwing stuff out there which is complete crap!


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    You criticise them for shutting everything down, yet when I ask, your solution is this ^^^
    Leave all internal systems up and running, allowing the infection to spread even more?
    Really?

    Yes
    pioneerpro wrote:
    We'd be back in business fully if the very basic first rule of ransomware attacks were adhered to - DON'T TURN OFF THE AFFECTED SYSTEMS MID-ENCRYPTION UNLESS YOU HAVE A TESTED FAILOVER.

    There's isolating components, and then there's graceless shutdowns mid-encryption cycle.

    If you knew what you were talking about you'd understand the difference.

    You don't, so you won't, so you'll continue throwing mud.


  • Banned (with Prison Access) Posts: 989 ✭✭✭ineedeuro


    You criticise them for shutting everything down, yet when I ask, your solution is this ^^^
    Leave all internal systems up and running, allowing the infection to spread even more?
    Really?

    Now, I'm not saying that how it was handled within the HSE from start to finish has been anywhere near textbook, but you're just throwing stuff out there which is complete crap!

    OK, let me explain my "complete crap".
    By shutting down the systems you end up not knowing what is and isn't affected. You also risk corrupting the ones that are already affected, so if you do get a key it is useless.

    Disconnecting external connectivity means the hackers are out of your system. Shutting down the network means the virus cannot spread. Identifying and isolating the infected systems means you can keep the unaffected systems available and bring things back online quicker.
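
    To be concrete about "identify the infected systems": even a crude triage pass run from a clean admin host would tell you more than guessing. A rough Python sketch - the file extension, note filenames and mount paths are just placeholders, the real indicators depend on the strain:

    import os

    # Placeholder indicators - the real ones depend on the specific ransomware strain
    SUSPECT_EXTENSION = ".FEEDC0DE"
    RANSOM_NOTE_NAMES = {"readme.txt", "decrypt_instructions.txt"}

    def count_indicators(mount_point):
        """Crude check: how many encrypted-looking files or ransom notes are present?"""
        hits = 0
        for dirpath, _, filenames in os.walk(mount_point):
            for name in filenames:
                if name.endswith(SUSPECT_EXTENSION) or name.lower() in RANSOM_NOTE_NAMES:
                    hits += 1
        return hits

    # Each suspect server's filesystem mounted read-only on a clean triage box (hypothetical paths)
    for mount in ["/mnt/triage/server01", "/mnt/triage/server02"]:
        hits = count_indicators(mount)
        print(f"{mount}: {'likely infected' if hits else 'no indicators found'} ({hits} hits)")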


  • Registered Users Posts: 12,262 ✭✭✭✭Flinty997


    pioneerpro wrote: »
    ...
    We'd be back in business fully if the very basic first rule of ransomware attacks were adhered to - DON'T TURN OFF THE AFFECTED SYSTEMS MID-ENCRYPTION UNLESS YOU HAVE A TESTED FAILOVER.

    Now, even with the key, we're in serious serious trouble.

    That isn't an automatic first rule. It's a rule in specific situations, and you only know that now with certainty (in this case) with the benefit of hindsight. You might never get the key, and instead of 50% of your data being unrecoverable, it would be 100%. Likewise if it was deleting or overwriting data.

    Another reason to leave it running is so you can run forensics on it as it runs, to find out more about it. But equally, it might overwrite data needed for forensics post-attack, in which case you'd turn it off.

    So the answer is...it depends.


  • Registered Users Posts: 3,337 ✭✭✭Wombatman


    kippy wrote: »
    one cannot assume that the way to stop this happening in future with such dire consequences is to throw money and resources at the HSE/infrastructure itself, whether preventative or reactionary - that is a never-ending resource pit which guarantees very little.
    The root cause of the issue needs to be eradicated.
    It is possible to invest in IT and cybersecurity in a sensible way by following well-established best practices. Many organisations do. Your point is fatalistic and defeatist, feeding into your incessant mantra of "Ah sure, you can't blame the HSE. There is nothing you can do to combat these nasty men".
    kippy wrote: »
    The root cause of the issue needs to be eradicated.

    So you are saying the best way to stop these attacks, and the inevitable costly fallout, is to somehow put a stop to crime?


  • Registered Users Posts: 598 ✭✭✭pioneerpro


    Flinty997 wrote: »
    That isn't an automatic first rule. It's a rule in specific situations, and you only know that now with certainty (in this case) with the benefit of hindsight. You might never get the key, and instead of 50% of your data being unrecoverable, it would be 100%. Likewise if it was deleting or overwriting data.

    Simply put, no, but there's no use arguing. If you want to talk about locking NFS shares, monitoring file handles, pruning the production Active Directory, locking in-memory processes, blacklisting the existing routing tables on certain ports for non-privileged users, etc., we can go there.

    But ultimately there's no point. Why?

    Because failover to a known good backup post-isolation is the solution, and always is. Appropriate granularity of permissions to the use-cases of the systems would have prevented this outright in any case, as I've stated before.

    File-handle monitoring and examination of memory-resident applications and their I/O processes should have triggered an alarm - at which point simply switching the relevant network-shares/file-systems to read-only would have completely stopped propagation of this worm in its tracks.

    If you don't have the privileges and the ability to encrypt files on the file system, then the attack simply can't happen.


    They panicked and decided to pull the plug mid-encryption instead. A worst-practice, emotional and panicked reaction without any RAID assessment. I blame the lack of formal policies entirely for this.
    Another reason to leave it running is so you can run forensics on it as it runs, to find out more about it. But equally, it might overwrite data needed for forensics post-attack, in which case you'd turn it off.

    So the answer is...it depends.

    You'll have to point out to me the best practice policies that suggest you *ever* do this on a production system. I've never heard of any in-house recovery team ever doing anything of the sort, nor would they have the time in any company I've worked for.

    How many years have you worked in System Architecture or Cybersecurity out of interest? You RHCA or equivalent?


  • Registered Users Posts: 8,184 ✭✭✭riclad


    They shut down the system to prevent further malware spreading, or else maybe to prevent backups being erased.
    I just hope they increase the security of the whole system to stop future attacks.
    Maybe they might switch to a more secure 24/7 backup system, maybe backing up to the cloud, so they have backups of all medical records every day - e.g. if there's another hack they will have all the data available to restore, and it's a secure backup, i.e. it can't be hacked or deleted by any potential hacker.
    Once a backup is complete it disconnects all connections to the network, and backups are maybe set to read-only mode.
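
    Something like this is roughly what I mean by read-only backups - just a sketch, the path is made up, and chattr needs root on an ext-style filesystem:

    import subprocess
    from pathlib import Path

    BACKUP_DIR = Path("/backups/hse/nightly/2021-06-22")   # hypothetical completed backup

    def lock_backup(backup_dir):
        """Mark every file in the finished backup immutable so it can't be encrypted or deleted."""
        for path in backup_dir.rglob("*"):
            if path.is_file():
                # +i = immutable: even root has to clear the flag before touching the file
                subprocess.run(["chattr", "+i", str(path)], check=True)

    lock_backup(BACKUP_DIR)
    print(f"{BACKUP_DIR} locked - now take the backup host off the network until the next run")
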
    At this point companies are being hacked every day; we do not hear about most of them unless it involves millions of users.

    I think we need a new OS that is built from the start with data security as the first objective.

    Windows 10 is built on layers of old code so as to be compatible with old 32-bit apps.


  • Registered Users Posts: 12,262 ✭✭✭✭Flinty997


    pioneerpro wrote: »
    Simply put, no, but there's no use arguing....

    Answer this simple question then: if they'd let the systems run until fully encrypted, as you suggest, but then didn't get the key - what then?

