Corefinity - Notice history

Google Cloud Platform Stacks (GCP) - Operational

100% uptime
Feb 2023: 100.0% · Mar 2023: 100.0% · Apr 2023: 100.0%

Amazon Web Services Stacks (AWS) - Operational

100% uptime
Feb 2023: 100.0% · Mar 2023: 100.0% · Apr 2023: 100.0%

Microsoft Azure Stacks - Operational

100% uptime
Feb 2023: 100.0% · Mar 2023: 100.0% · Apr 2023: 100.0%

Civo Stacks - Operational

100% uptime
Feb 2023: 100.0% · Mar 2023: 100.0% · Apr 2023: 100.0%

Control Panel (manage.corefinity.com) - Operational

100% uptime
Feb 2023: 100.0% · Mar 2023: 100.0% · Apr 2023: 100.0%

Deployment Pipelines - Operational

100% uptime
Feb 2023: 100.0% · Mar 2023: 100.0% · Apr 2023: 100.0%

Notice history

Apr 2023

No notices reported this month

Mar 2023

No notices reported this month

Feb 2023

Corefinity investigating high loads across a number of GCP environments
  • Resolved

    We have also found a significant (3x to 5x) spike in traffic across many of our Magento 2 applications at the same time as this incident. We are conducting a separate investigation into this; however, there is no cause for concern.

    We have implemented a fix and are currently monitoring the result. Everything is healthy and green.

    We will provide further updates.

  • Identified

    We have identified the root cause of the issue. Unfortunately, it is completely unrelated to the recent events we have had, although those events have most likely contributed to the high load and increased the impact.

    We received an immediate response to our P1 request to GCP, and the root cause of the issue has been identified as high usage across the majority of client nodes by a process named "/home/kubernetes/bin/gcfsd". This is a GCP-managed process that provides virtual mounts to the servers.

    The line below shows this process using around 8 CPU cores (peaking at 20 to 22 cores on other clients).

    2114 root 20 0 23.5g 15.9g 21088 S 757.0 8.5 1962:35 /home/kubernetes/bin/gcfsd --mountpoint=/run/gcfsd/mnt --maxcontentcachesizemb=213 --maxlargefilescachesizemb=213 --layercachedir=/var/lib/containerd/io.containerd.snap+

    We are working on an immediate mitigation action at the moment and another update will be provided within 10 minutes.

  • Investigating

    We are currently investigating high loads across a number of GCP servers and environments.
    We will provide another update within 30 minutes.

GCP Incident
  • Resolved

    The GCP incident has been resolved.
    The frontend impact of this incident on users was minimal.
    Please follow https://status.corefinity.com/cldq0aue114067rfnex131443l for updates on the NFS degradation; once that rollout is complete, events such as these will no longer cause any impact to users.

  • Investigating

    Unfortunately, we are currently dealing with a joint incident on the GCP Stacks, whereby a critical security update released overnight is being pushed to all Kubernetes infrastructure by GCP.

    Corefinity offers a completely redundant and scalable stack, and in normal times these updates are pushed silently with no impact to users at all. Unfortunately, due to the ongoing NFS degradation incident (https://status.corefinity.com/cldq0aue114067rfnex131443l), we are seeing small dropouts exclusively on GCP stacks.

    We are working closely with GCP Cloud Tokyo and GCP Cloud UK to pause the rollout of security updates until this incident is over.

    The Corefinity engineering team is also expediting our rollout of the NFS degradation fix, and we estimate the rollout will be complete across all stacks within 24 hours.
