Migrate from JFrog Artifactory

This guide covers moving an APT repository from JFrog Artifactory to Repod. It uses Artifactory's REST API and Artifactory Query Language (AQL) to enumerate and download every .deb package, then imports them into Repod with a bulk upload script.

Artifactory CE vs Repod

Artifactory Community Edition requires a JFrog account for activation and phones home for telemetry. Repod is fully self-contained: no account, no licence server, no outbound calls required. It stores everything in Docker volumes on your own infrastructure.


1. When to migrate

This guide is the right approach when:

  • Artifactory is used primarily or exclusively as an APT mirror or internal package host, and the complexity of its multi-format model is no longer justified.
  • Your team wants a dedicated APT workflow with built-in CVE scanning and manifest generation instead of maintaining generic artifact storage.
  • You want to reduce licensing costs: Artifactory Pro or Enterprise licences are expensive; Repod is open-source and self-hosted.

You can run Repod and Artifactory side by side during the transition and migrate repositories one distribution at a time.


2. Before you start — inventory checklist

Gather this information before making any changes:

  • Artifactory base URL (e.g. https://artifactory.example.com) and admin credentials.
  • Names of all APT local repositories (type Debian in Artifactory terminology).
  • Distributions and components defined in each repository's Debian settings.
  • Approximate package count and total disk size.
  • Number and location of client machines, and how sources.list is managed.
  • Whether Artifactory enforces authentication on the read path (anonymous access vs. deployment tokens).

Tip

In Artifactory, navigate to Administration → Repositories → Repositories and filter by type "Local" and package type "Debian". Screenshot or export the list as your migration tracking sheet.
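The same list is available from the Repositories API (`GET /artifactory/api/repositories?type=local&packageType=debian`), which is easier to feed into a tracking sheet. A sketch, run here against a canned response so the jq filter works offline; the commented-out curl shows the live form (credentials and hostname are placeholders):

```shell
# Offline stand-in for the Repositories API response. The live request would be:
#   curl -s -u "$ART_USER:$ART_PASS" \
#     "$ART_URL/artifactory/api/repositories?type=local&packageType=debian"
sample='[{"key":"apt-local","type":"LOCAL","packageType":"Debian"},
         {"key":"apt-staging","type":"LOCAL","packageType":"Debian"}]'

# Extract just the repository keys — one per line.
echo "$sample" | jq -r '.[].key'
# → apt-local
# → apt-staging
```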


3. Step 1 — List packages with AQL

Artifactory's Storage API can list files recursively, but Artifactory Query Language (AQL) gives you precise filtering and pagination with far less response overhead. Use AQL to enumerate every .deb file in your repository before downloading anything.

#!/usr/bin/env bash
# artifactory-list.sh — list all .deb files in an Artifactory local repo via AQL
set -euo pipefail

ART_URL="https://artifactory.example.com"
REPO_NAME="apt-local"
ART_USER="admin"
ART_PASS="changeme"

curl -s -u "${ART_USER}:${ART_PASS}" \
  -X POST "${ART_URL}/artifactory/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d "items.find({
        \"repo\": \"${REPO_NAME}\",
        \"name\": {\"\$match\": \"*.deb\"}
      }).include(\"repo\", \"path\", \"name\", \"size\", \"sha256\")" \
  | jq -r '.results[] | "\(.repo)/\(.path)/\(.name)"'

Save the output to a file — it becomes the download manifest:

chmod +x artifactory-list.sh
./artifactory-list.sh > package-manifest.txt
wc -l package-manifest.txt   # confirm total count matches your inventory
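A quick sanity check on the manifest catches a too-broad AQL match before you spend time downloading. `check_manifest` is a hypothetical helper, not one of the guide's scripts; it counts lines that do not end in `.deb`:

```shell
# Count manifest lines that are not .deb paths (should be 0 for a clean run).
check_manifest() {
  grep -cv '\.deb$' "$1" || true   # grep -c exits 1 when the count is 0
}

# Self-contained demo against an inline sample manifest:
printf 'apt-local/pool/main/p/pkg_1.0_amd64.deb\napt-local/pool/Packages.gz\n' \
  > /tmp/sample-manifest.txt
check_manifest /tmp/sample-manifest.txt   # → 1 (the Packages.gz line)
```

Any non-zero count means the AQL `$match` pattern, or the repository itself, contains files you probably do not want to import.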

Alternatively, use the simpler Storage API for a quick directory listing (no pagination, suitable for smaller repositories):

curl -s -u "${ART_USER}:${ART_PASS}" \
  "${ART_URL}/artifactory/api/storage/${REPO_NAME}?list&deep=1&listFolders=0" \
  | jq -r '.files[].uri | select(endswith(".deb"))'

Warning

The Storage API ?list&deep=1 endpoint loads the entire repository tree into memory on the Artifactory server before returning. On repositories with tens of thousands of files this can be slow or time out. Prefer AQL for large repositories.


4. Step 2 — Download .deb files from Artifactory

With package-manifest.txt in hand, download every file to a local staging directory:

#!/usr/bin/env bash
# artifactory-export.sh — download .deb files listed in package-manifest.txt
set -euo pipefail

ART_URL="https://artifactory.example.com"
ART_USER="admin"
ART_PASS="changeme"
MANIFEST="./package-manifest.txt"
OUT_DIR="./artifactory-export"

mkdir -p "$OUT_DIR"

while IFS= read -r path; do
  filename=$(basename "$path")
  dest="${OUT_DIR}/${filename}"

  if [[ -f "$dest" ]]; then
    echo "Already exists, skipping: ${filename}"
    continue
  fi

  echo -n "Downloading ${filename} ... "
  http_code=$(curl -s -L -u "${ART_USER}:${ART_PASS}" \
    -o "$dest" \
    -w "%{http_code}" \
    "${ART_URL}/artifactory/${path}")

  if [[ "$http_code" == "200" ]]; then
    echo "OK"
  else
    echo "FAILED (HTTP ${http_code})"
    rm -f "$dest"
  fi
done < "$MANIFEST"

echo "Download complete. Files in ${OUT_DIR}/"

The direct download URL pattern for individual packages is:

GET /artifactory/apt-local/pool/main/p/package_1.0_amd64.deb

The repository name, component path, and filename come from the AQL results stored in package-manifest.txt.
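Since the AQL query already requested each file's `sha256`, you can verify downloads against it. `verify_deb` is a hypothetical helper sketched here, not part of the export script; it assumes you kept the checksum column alongside each path:

```shell
# Return success only if the file's SHA-256 matches the expected value.
verify_deb() {
  local file="$1" expected="$2"
  [ "$(sha256sum "$file" | awk '{print $1}')" = "$expected" ]
}

# Demo with a throwaway file standing in for a downloaded package:
printf 'not a real package\n' > /tmp/demo.deb
expected=$(sha256sum /tmp/demo.deb | awk '{print $1}')
verify_deb /tmp/demo.deb "$expected" && echo "checksum OK"
```

Running this over the staging directory before the import step means a truncated or corrupted download is caught on your side rather than surfacing as a broken package in Repod.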

Tip

Add --retry 3 --retry-delay 5 to the curl flags if your network connection to Artifactory is unstable or if you are downloading over a VPN.


5. Step 3 — Set up Repod

If you do not already have Repod running, follow the Getting Started guide and return here once:

  • The Docker Compose stack is up (frontend :3003, backend :8000, nginx :80).
  • A GPG signing key has been generated in Settings → GPG.
  • You have an API token (prefix repod_) from Settings → API Tokens.

6. Step 4 — Bulk import into Repod

Upload the downloaded packages to Repod. The backend enforces a rate limit of 20 uploads per minute; the script below paces itself accordingly.

#!/usr/bin/env bash
# repod-import.sh — bulk upload .deb files to Repod
set -euo pipefail

REPOD_URL="http://repod.example.com"
API_TOKEN="repod_xxxxxxxxxxxxxxxx"
DEB_DIR="./artifactory-export"
DISTRIBUTION="jammy"
COMPONENT="main"
UPLOAD_DELAY=3   # 20/min rate limit → 3s between uploads

success=0
failed=0

for deb in "${DEB_DIR}"/*.deb; do
  filename=$(basename "$deb")
  echo -n "Uploading ${filename} ... "

  http_code=$(curl -s -o /tmp/repod_response.json -w "%{http_code}" \
    -X POST "${REPOD_URL}/upload/" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -F "file=@${deb}" \
    -F "distribution=${DISTRIBUTION}" \
    -F "component=${COMPONENT}")

  if [[ "$http_code" == "200" || "$http_code" == "201" ]]; then
    echo "OK"
    success=$((success + 1))   # note: ((success++)) would abort under set -e on the first increment
  else
    echo "FAILED (HTTP ${http_code})"
    cat /tmp/repod_response.json
    failed=$((failed + 1))
  fi

  sleep "$UPLOAD_DELAY"
done

echo ""
echo "Done. Success: ${success}  Failed: ${failed}"

Run in a persistent terminal session:

chmod +x repod-import.sh
tmux new-session -s repod-import './repod-import.sh | tee import.log'
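If the summary line reports failures, `import.log` tells you which files to re-run. The log lines have the shape `Uploading foo.deb ... FAILED (HTTP 429)`, so a small awk filter (a hypothetical helper, not part of the import script) recovers just the filenames:

```shell
# Print the filename (second field) from every FAILED line in the log.
failed_files() {
  awk '/FAILED/ {print $2}' "$1"
}

# Demo against a canned log:
printf 'Uploading a.deb ... OK\nUploading b.deb ... FAILED (HTTP 429)\n' \
  > /tmp/import-demo.log
failed_files /tmp/import-demo.log   # → b.deb
```

Copy the failed files into a fresh directory and point `DEB_DIR` at it for a second pass, rather than re-uploading everything.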

7. Step 5 — Verify

Compare counts and run a canary test before touching production clients.

# Count downloaded files
ls artifactory-export/*.deb | wc -l

# Count packages now in Repod
curl -s -H "Authorization: Bearer ${API_TOKEN}" \
  "${REPOD_URL}/packages/?distribution=jammy" | jq '.total'

Test on an isolated canary machine:

echo "deb [signed-by=/etc/apt/trusted.gpg.d/repod.gpg] \
  http://repod.example.com jammy main" \
  | sudo tee /etc/apt/sources.list.d/repod-test.list

curl -fsSL http://repod.example.com/gpg.key \
  | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/repod.gpg

sudo apt update && sudo apt install your-internal-package

8. Step 6 — Update client machines

Roll out the new APT source to all clients using your configuration management tooling.

sudo rm /etc/apt/sources.list.d/artifactory.list

echo "deb [signed-by=/etc/apt/trusted.gpg.d/repod.gpg] \
  http://repod.example.com jammy main" \
  | sudo tee /etc/apt/sources.list.d/repod.list

curl -fsSL http://repod.example.com/gpg.key \
  | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/repod.gpg

sudo apt update

The same rollout as Ansible tasks:

- name: Remove Artifactory APT source
  ansible.builtin.file:
    path: /etc/apt/sources.list.d/artifactory.list
    state: absent

- name: Download Repod GPG key
  ansible.builtin.get_url:
    url: http://repod.example.com/gpg.key
    dest: /tmp/repod.gpg.asc

- name: Install dearmored GPG key
  ansible.builtin.command:
    cmd: gpg --dearmor -o /etc/apt/trusted.gpg.d/repod.gpg /tmp/repod.gpg.asc
    creates: /etc/apt/trusted.gpg.d/repod.gpg

- name: Add Repod APT source
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/trusted.gpg.d/repod.gpg] http://repod.example.com jammy main"
    state: present
    filename: repod

9. Step 7 — Cut over

Point the canonical APT hostname at Repod via a DNS change or reverse proxy update. Verify with sudo apt update across multiple machines from different network segments before declaring success.
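If you front both systems with a reverse proxy, the cut-over is a one-line change. A sketch of an Nginx vhost, assuming `apt.example.com` is the canonical hostname your clients already use (all names here are placeholders):

```nginx
# Hypothetical cut-over vhost: clients keep using apt.example.com while the
# upstream flips from Artifactory to Repod (and back again, for rollback).
server {
    listen 80;
    server_name apt.example.com;

    location / {
        proxy_pass http://repod.example.com;   # was: http://artifactory.example.com
        proxy_set_header Host $host;
    }
}
```

Rolling back is then a matter of reverting `proxy_pass` and reloading Nginx, with no DNS propagation delay.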


10. Rollback plan

Keep Artifactory running at its original URL for at least two weeks after cut-over. To roll back, revert the DNS record or proxy upstream. No client changes are needed — apt update will start pulling from Artifactory again automatically.

Warning

Do not decommission Artifactory until Repod has served production traffic successfully for at least two weeks and you have verified that all package versions are present and installable.


11. Common issues

AQL returns no results

Confirm the repository name in the AQL query matches exactly (case-sensitive). In Artifactory, open Administration → Repositories, find your APT repo, and copy the "Repository Key" value verbatim.

401 Unauthorized on downloads

Artifactory access tokens and passwords are separate. If you created an access token in the UI, pass it as a Bearer token rather than HTTP Basic:

curl -H "Authorization: Bearer ${ART_TOKEN}" ...

Packages land in the wrong distribution

Artifactory stores Debian packages with distribution metadata in their properties. Repod determines the distribution from the upload parameters (-F "distribution=...") rather than reading the .deb control file. Ensure your import script targets the correct distribution name.
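To see what distribution Artifactory recorded for a given package, query its properties (`GET /artifactory/api/storage/<repo>/<path>?properties`). The sketch below runs against a canned response so the jq filter works offline; `deb.distribution` is the property name Artifactory uses on Debian repositories, but verify it against your instance:

```shell
# Offline stand-in for the ?properties response. The live request would be:
#   curl -s -u "$ART_USER:$ART_PASS" \
#     "$ART_URL/artifactory/api/storage/apt-local/pool/main/p/package_1.0_amd64.deb?properties"
sample='{"properties":{"deb.distribution":["jammy"],"deb.component":["main"]}}'

# Extract the distribution the package was published to in Artifactory.
echo "$sample" | jq -r '.properties["deb.distribution"][]'   # → jammy
```

Cross-checking a handful of packages this way before the bulk import confirms that the `DISTRIBUTION` value in repod-import.sh matches what clients were actually pulling from Artifactory.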

Large files timing out on upload

Increase client_max_body_size in the Repod Nginx configuration before running the import. Very large packages may also need a longer FastAPI request timeout — set --timeout-keep-alive in the backend service definition in docker-compose.yaml.
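For reference, the directive goes in the `http`, `server`, or `location` context of the Nginx config; a minimal sketch (512m is an arbitrary example, pick a value above your largest package):

```nginx
server {
    # Raise the upload cap from Nginx's 1 MB default so large .deb files
    # reach the backend instead of failing with 413 Request Entity Too Large.
    client_max_body_size 512m;
}
```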