Compare commits

...

19 Commits

Author SHA1 Message Date
Josiah Glosson
ee9d4fad33 Add quotes around $env:JAVA_HOME_11_X64
Co-authored-by: Alejandro González <7822554+AlexTMjugador@users.noreply.github.com>
Signed-off-by: Josiah Glosson <soujournme@gmail.com>
2025-06-27 10:29:10 -05:00
Josiah Glosson
140b4060e6 Set JAVA_HOME to JAVA_HOME_11_X64 on Windows for theseus-release 2025-06-26 16:51:46 -05:00
Emma Alexia
6f03fae233 Clear owner's project cache after deleting organization (#3794)
* Clear owner's project cache after deleting organization

Fixes an issue where people would think their projects were deleted along with their organization, when this isn't actually the case.

* address PR review

* Update apps/labrinth/src/routes/v3/organizations.rs

Signed-off-by: Emma Alexia <wafflecoffee7@gmail.com>

* Fix lint

* actually fix lint

---------

Signed-off-by: Emma Alexia <wafflecoffee7@gmail.com>
2025-06-26 19:30:58 +00:00
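A minimal sketch, with hypothetical names, of the cache invalidation described in the commit above: once the organization is deleted, the owner's cached project list is dropped so stale entries are no longer served. The key layout and helper are assumptions for illustration, not labrinth's actual code.

use redis::AsyncCommands;

// Hypothetical helper: drop the owner's cached project list after the
// organization (and its project ownership records) have been removed.
async fn clear_owner_project_cache(
    conn: &mut redis::aio::MultiplexedConnection,
    owner_user_id: u64,
) -> redis::RedisResult<()> {
    // Assumed cache key layout; the real labrinth keys may differ.
    let key = format!("user_projects:{owner_user_id}");
    conn.del::<_, ()>(key).await?;
    Ok(())
}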
Emma Alexia
22fc0c994d Remove new-projects channel emojis (#3843)
* Remove new-projects channel emojis

With new loaders, this functionality has become unwieldy. We don't have enough emoji slots in the server for the number of emojis we'd need for the loaders. It is easiest to simply format them in the same way knossos does.

Note: I forgot how the borrow checker works. This compiles, but I'm sure it's gore to anyone who actually knows the difference between a string slice and a String; I come from Javaland, though, so please forgive.

* Rename func accordingly
2025-06-26 18:54:22 +00:00
Josiah Glosson
a1ccbc5757 Update Java to 11 with --release 8 (#3846) 2025-06-26 18:38:58 +00:00
Jai A
053cf10198 Update ads.txt again
(cherry picked from commit 67304fab25ed37ed3e39766dc9378ae3a760cb06)
2025-06-26 11:08:44 -07:00
Jai A
257efd8ad7 Update ads.txt
(cherry picked from commit 01037c184efad97e4b9f1a11c3e7e1eb21d44510)
2025-06-26 11:08:44 -07:00
Alejandro González
b75cfc063b Sign Windows Theseus binaries with DigiCert KeyLocker's cloud HSM (#3838)
* feat(ci): sign Windows Theseus bins with DigiCert KeyLocker cloud HSM

* perf(ci): speed up Jsign installation

* fix(ci): use absolute path to DigiCert client certificate

This should avoid the errors we've seen on CI related to Jsign not being
able to find it.

* fix(ci): trim strange characters out from DigiCert credentials

* ci: another attempt at fixing Jsign errors

* chore: add comment mentioning why `jsign` choco deps are ignored

* tweak: move KeyLocker signing config to CI release Tauri config file

This prevents casual local builds from attempting to use a signing
command they really can't use, improving developer experience.

* tweak(ci/windows): do not waste time and signatures with MSIs

We aren't distributing these anyway. This should reduce the signing
operations required for building the app from 5 (one for the binary,
another for the MSI installer, two for WiX extension DLLs and one for
the NSIS installer) to 2.

* feat(ci): make Windows code signing toggleable, do not sign non-final builds

* chore(ci): tweak `sign-windows-binaries` input wording

* fix(ci): deal with usual Powershell syntax shenanigans

* fix(ci): work around more Powershell syntax shenanigans

Who thought it'd be a good idea to make a comma synonymous with a space
for separating command-line arguments? Why have two characters for the
same thing?

* perf(ci): do not run app build workflow on Labrinth changes

Labrinth is not related to the app at all, so this is just a waste of CI
minutes.

* ci(theseus): enable Windows code signing by default for manual triggers

These are expected to be uncommon, so defaulting to whatever causes the
fewest human errors when publishing a release makes the most sense.
2025-06-26 17:43:20 +00:00
Prospector
2d8420131d Update changelog 2025-06-26 10:45:48 -07:00
Prospector
c793b68aed Add quick server button, dynamic price preview for custom server modal (#3815)
* Add quick server creation button, and dynamic pricing to custom server selection

* Remove test in compatibility card

* Lint + remove duplicate file

* Adjust z-index of popup

* $6 -> $5

* Dismiss prompt if the button is clicked

* Make "Create a server" disabled for now

* Use existing loaders type
2025-06-26 15:38:42 +00:00
Josiah Glosson
47af459f24 Migrate Java code to Gradle (#3833)
* Create get_resource_file macro to get an embedded resource

If the tauri feature is enabled, the resource will be loaded from Tauri resources.
If the tauri feature is disabled, the resource will be extracted to a temp directory.

* Wrap process execution to inject system properties through stdin

* Pass the time values as ISO 8601 datetimes

* Remove entirely internal modrinth.process.uuid

* Redo Java version checking somewhat and fix a few bugs with it

* Fix game launch with early access versions of Java

* Format Java code

* Use Gradle to build Java code

* Make Gradle build reproducible

* Fix constant rebuilds

* Get rid of unnecessary rebuilds

* Fix modrinth.profile.modified being the same as modrinth.profile.created

* Make javac use all lints and treat them as errors

* Force Gradle color output

* Add Java formatting config

* Make gradlew executable

* Revert to manually extracting class files

* Switch to using update resource macro

* fix: make `app-lib` build again

---------

Co-authored-by: Alejandro González <me@alegon.dev>
2025-06-26 14:56:35 +00:00
Josiah Glosson
f10e0f2bf1 Pass system properties into Minecraft (+ some launch code cleanup) (#3822)
* Create get_resource_file macro to get an embedded resource

If the tauri feature is enabled, the resource will be loaded from Tauri resources.
If the tauri feature is disabled, the resource will be extracted to a temp directory.

* Wrap process execution to inject system properties through stdin

* Pass the time values as ISO 8601 datetimes

* Remove entirely internal modrinth.process.uuid

* Redo Java version checking somewhat and fix a few bugs with it

* Fix game launch with early access versions of Java

* Format Java code

* Fix modrinth.profile.modified being the same as modrinth.profile.created

* Revert to manually extracting class files
2025-06-26 13:23:14 +00:00
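The get_resource_file macro described in the two commits above could look roughly like the following sketch; resource_dir and the resources/ path are assumptions for illustration, not the actual app-lib implementation.

// Hedged sketch: with the `tauri` feature the resource is resolved from the
// app's bundled Tauri resources; without it, the bytes embedded at compile
// time are extracted to a temporary file and that path is returned.
#[macro_export]
macro_rules! get_resource_file {
    ($name:literal) => {{
        #[cfg(feature = "tauri")]
        {
            // Assumption: a helper that returns the Tauri resource directory.
            $crate::resource_dir().join($name)
        }
        #[cfg(not(feature = "tauri"))]
        {
            // Embed the file at compile time and extract it on demand.
            let bytes: &[u8] = include_bytes!(concat!("resources/", $name));
            let out = std::env::temp_dir().join($name);
            std::fs::write(&out, bytes)
                .expect("failed to extract embedded resource");
            out
        }
    }};
}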
Alejandro González
569d60cb57 tweak(labrinth): return proper error message for invalid password change flows (#3839) 2025-06-25 16:51:08 +00:00
Ken
74d36a6a2d Update config.toml (#3832)
Signed-off-by: Ken <131881470+Keniis0712@users.noreply.github.com>
2025-06-23 22:40:16 +00:00
Prospector
ced073d26c Add colors for new loaders, reduce utility redundancy (#3820)
* Add colors to some newer loaders

* Make loader formatting consistent everywhere, remove redundant utilities
2025-06-21 14:35:42 +00:00
Josiah Glosson
cc34e69524 Initial shared instances backend (#3800)
* Create base shared instance migration and initial routes

* Fix build

* Add version uploads

* Add permissions field for shared instance users

* Actually use permissions field

* Add "public" flag to shared instances that allow GETing them without authorization

* Add the ability to get and list shared instance versions

* Add the ability to delete shared instance versions

* Fix build after merge

* Secured file hosting (#3784)

* Remove Backblaze-specific file-hosting backend

* Added S3_USES_PATH_STYLE_BUCKETS

* Remove unused file_id parameter from delete_file_version

* Add support for separate public and private buckets in labrinth::file_hosting

* Rename delete_file_version to delete_file

* Add (untested) get_url_for_private_file

* Remove url field from shared instance routes

* Remove url field from shared instance routes

* Use private bucket for shared instance versions

* Make S3 environment variables fully separate between public and private buckets

* Change file host expiry for shared instances to 180 seconds

* Fix lint

* Merge shared instance migrations into a single migration

* Replace shared instance owners with Ghost instead of deleting the instance
2025-06-19 19:46:12 +00:00
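A hedged sketch of the private-bucket flow this commit describes: shared instance version files live in a private bucket and are only handed out through short-lived presigned URLs, using the 180-second expiry mentioned above. The FileHost trait, error type, and wrapper function here are illustrative assumptions; only the function name get_url_for_private_file and the expiry come from the commit.

use std::time::Duration;

pub const SHARED_INSTANCE_URL_EXPIRY: Duration = Duration::from_secs(180);

#[async_trait::async_trait]
pub trait FileHost: Send + Sync {
    /// Return a presigned URL for an object in the private bucket,
    /// valid for `expiry`.
    async fn get_url_for_private_file(
        &self,
        path: &str,
        expiry: Duration,
    ) -> Result<String, String>;
}

/// What the "get shared instance version" route could hand back as a
/// download URL (illustrative wrapper, not the real labrinth route code).
pub async fn shared_instance_version_url(
    host: &dyn FileHost,
    version_path: &str,
) -> Result<String, String> {
    host.get_url_for_private_file(version_path, SHARED_INSTANCE_URL_EXPIRY)
        .await
}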
IMB11
d4864deac5 fix: intercom bubble colliding with notifications (#3810) 2025-06-19 15:18:10 +00:00
IMB11
125207880d fix: state update race conditions (#3812) 2025-06-19 15:18:00 +00:00
IMB11
a8f17f40f5 fix: unclear file upload errors temp fix (#3811) 2025-06-19 15:17:50 +00:00
136 changed files with 3826 additions and 2382 deletions

View File

@@ -1,6 +1,6 @@
# Windows has stack overflows when calling from Tauri, so we increase the default stack size used by the compiler
[target.'cfg(windows)']
rustflags = ["-C", "link-args=/STACK:16777220"]
rustflags = ["-C", "link-args=/STACK:16777220", "--cfg", "tokio_unstable"]
[build]
rustflags = ["--cfg", "tokio_unstable"]

View File

@@ -14,5 +14,5 @@ max_line_length = 100
max_line_length = off
trim_trailing_whitespace = false
[*.rs]
indent_size = 4
[*.{rs,java,kts}]
indent_size = 4

View File

@@ -9,14 +9,18 @@ on:
- .github/workflows/theseus-release.yml
- 'apps/app/**'
- 'apps/app-frontend/**'
- 'apps/labrinth/src/common/**'
- 'apps/labrinth/Cargo.toml'
- 'packages/app-lib/**'
- 'packages/app-macros/**'
- 'packages/assets/**'
- 'packages/ui/**'
- 'packages/utils/**'
workflow_dispatch:
inputs:
sign-windows-binaries:
description: Sign Windows binaries
type: boolean
default: true
required: false
jobs:
build:
@@ -103,11 +107,21 @@ jobs:
sudo apt-get update
sudo apt-get install -y libwebkit2gtk-4.1-dev build-essential curl wget file libxdo-dev libssl-dev pkg-config libayatana-appindicator3-dev librsvg2-dev
- name: Install code signing client (Windows only)
if: startsWith(matrix.platform, 'windows')
run: choco install jsign --ignore-dependencies # GitHub runners come with a global Java installation already
- name: Install frontend dependencies
run: pnpm install
- name: Disable Windows code signing for non-final release builds
if: ${{ startsWith(matrix.platform, 'windows') && !startsWith(github.ref, 'refs/tags/v') && !inputs.sign-windows-binaries }}
run: |
jq 'del(.bundle.windows.signCommand)' apps/app/tauri-release.conf.json > apps/app/tauri-release.conf.json.new
Move-Item -Path apps/app/tauri-release.conf.json.new -Destination apps/app/tauri-release.conf.json -Force
- name: build app (macos)
run: pnpm --filter=@modrinth/app run tauri build --target universal-apple-darwin --config "tauri-release.conf.json"
run: pnpm --filter=@modrinth/app run tauri build --target universal-apple-darwin --config tauri-release.conf.json
if: startsWith(matrix.platform, 'macos')
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -121,15 +135,30 @@ jobs:
TAURI_SIGNING_PRIVATE_KEY: ${{ secrets.TAURI_PRIVATE_KEY }}
TAURI_SIGNING_PRIVATE_KEY_PASSWORD: ${{ secrets.TAURI_KEY_PASSWORD }}
- name: build app
run: pnpm --filter=@modrinth/app run tauri build --config "tauri-release.conf.json"
id: build_os
if: "!startsWith(matrix.platform, 'macos')"
- name: build app (Linux)
run: pnpm --filter=@modrinth/app run tauri build --config tauri-release.conf.json
if: startsWith(matrix.platform, 'ubuntu')
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
TAURI_SIGNING_PRIVATE_KEY: ${{ secrets.TAURI_PRIVATE_KEY }}
TAURI_SIGNING_PRIVATE_KEY_PASSWORD: ${{ secrets.TAURI_KEY_PASSWORD }}
- name: build app (Windows)
run: |
[System.Convert]::FromBase64String("$env:DIGICERT_ONE_SIGNER_CLIENT_CERTIFICATE_BASE64") | Set-Content -Path signer-client-cert.p12 -AsByteStream
$env:DIGICERT_ONE_SIGNER_CREDENTIALS = "$env:DIGICERT_ONE_SIGNER_API_KEY|$PWD\signer-client-cert.p12|$env:DIGICERT_ONE_SIGNER_CLIENT_CERTIFICATE_PASSWORD"
$env:JAVA_HOME = "$env:JAVA_HOME_11_X64"
pnpm --filter=@modrinth/app run tauri build --config tauri-release.conf.json --verbose --bundles 'nsis,updater'
Remove-Item -Path signer-client-cert.p12
if: startsWith(matrix.platform, 'windows')
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
TAURI_SIGNING_PRIVATE_KEY: ${{ secrets.TAURI_PRIVATE_KEY }}
TAURI_SIGNING_PRIVATE_KEY_PASSWORD: ${{ secrets.TAURI_KEY_PASSWORD }}
DIGICERT_ONE_SIGNER_API_KEY: ${{ secrets.DIGICERT_ONE_SIGNER_API_KEY }}
DIGICERT_ONE_SIGNER_CLIENT_CERTIFICATE_BASE64: ${{ secrets.DIGICERT_ONE_SIGNER_CLIENT_CERTIFICATE_BASE64 }}
DIGICERT_ONE_SIGNER_CLIENT_CERTIFICATE_PASSWORD: ${{ secrets.DIGICERT_ONE_SIGNER_CLIENT_CERTIFICATE_PASSWORD }}
- name: upload ${{ matrix.platform }}
uses: actions/upload-artifact@v4
with:

.idea/code.iml generated
View File

@@ -17,4 +17,4 @@
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
</module>
</module>

.idea/modules.xml generated
View File

@@ -5,4 +5,4 @@
<module fileurl="file://$PROJECT_DIR$/.idea/code.iml" filepath="$PROJECT_DIR$/.idea/code.iml" />
</modules>
</component>
</project>
</project>

Cargo.lock generated
View File

@@ -8889,6 +8889,7 @@ dependencies = [
"flate2",
"fs4",
"futures",
"hashlink",
"hickory-resolver",
"indicatif",
"notify",

View File

@@ -60,6 +60,7 @@ flate2 = "1.1.2"
fs4 = { version = "0.13.1", default-features = false }
futures = { version = "0.3.31", default-features = false }
futures-util = "0.3.31"
hashlink = "0.10.0"
hex = "0.4.3"
hickory-resolver = "0.25.2"
hmac = "0.12.1"

View File

@@ -127,7 +127,7 @@ async function handleJavaFileInput() {
const filePath = await open()
if (filePath) {
let result = await get_jre(filePath.path ?? filePath)
let result = await get_jre(filePath.path ?? filePath).catch(handleError)
if (!result) {
result = {
path: filePath.path ?? filePath,

View File

@@ -41,8 +41,8 @@ pub async fn jre_find_filtered_jres(
// Validates JRE at a given path
// Returns None if the path is not a valid JRE
#[tauri::command]
pub async fn jre_get_jre(path: PathBuf) -> Result<Option<JavaVersion>> {
jre::check_jre(path).await.map_err(|e| e.into())
pub async fn jre_get_jre(path: PathBuf) -> Result<JavaVersion> {
Ok(jre::check_jre(path).await?)
}
// Tests JRE of a certain version

View File

@@ -1,6 +1,24 @@
{
"bundle": {
"createUpdaterArtifacts": "v1Compatible"
"createUpdaterArtifacts": "v1Compatible",
"windows": {
"signCommand": {
"cmd": "jsign",
"args": [
"sign",
"--verbose",
"--storetype",
"DIGICERTONE",
"--keystore",
"https://clientauth.one.digicert.com",
"--storepass",
"env:DIGICERT_ONE_SIGNER_CREDENTIALS",
"--tsaurl",
"https://timestamp.sectigo.com,http://timestamp.digicert.com",
"%1"
]
}
}
},
"build": {
"features": ["updater"]

View File

@@ -14,9 +14,6 @@
"externalBin": [],
"icon": ["icons/128x128.png", "icons/128x128@2x.png", "icons/icon.icns", "icons/icon.ico"],
"windows": {
"certificateThumbprint": null,
"digestAlgorithm": "sha256",
"timestampUrl": "http://timestamp.digicert.com",
"nsis": {
"installMode": "perMachine",
"installerHooks": "./nsis/hooks.nsi"
@@ -30,7 +27,6 @@
"providerShortName": null,
"signingIdentity": null
},
"resources": [],
"shortDescription": "",
"linux": {
"deb": {

View File

@@ -85,11 +85,10 @@ During development, you might notice that changes made directly to entities in t
#### CDN options
`STORAGE_BACKEND`: Controls what storage backend is used. This can be either `local`, `backblaze`, or `s3`, but defaults to `local`
`STORAGE_BACKEND`: Controls what storage backend is used. This can be either `local` or `s3`, but defaults to `local`
The Backblaze and S3 configuration options are fairly self-explanatory in name, so here's simply their names:
`BACKBLAZE_KEY_ID`, `BACKBLAZE_KEY`, `BACKBLAZE_BUCKET_ID`
`S3_ACCESS_TOKEN`, `S3_SECRET`, `S3_URL`, `S3_REGION`, `S3_BUCKET_NAME`
The S3 configuration options are fairly self-explanatory in name, so here's simply their names:
`S3_ACCESS_TOKEN`, `S3_SECRET`, `S3_URL`, `S3_REGION`, `S3_PUBLIC_BUCKET_NAME`, `S3_PRIVATE_BUCKET_NAME`, `S3_USES_PATH_STYLE_BUCKETS`
#### Search, OAuth, and miscellaneous options

View File

@@ -1,5 +1,8 @@
<template>
<div class="vue-notification-group experimental-styles-within">
<div
class="vue-notification-group experimental-styles-within"
:class="{ 'intercom-present': isIntercomPresent }"
>
<transition-group name="notifs">
<div
v-for="(item, index) in notifications"
@@ -80,6 +83,8 @@ import {
} from "@modrinth/assets";
const notifications = useNotifications();
const isIntercomPresent = ref(false);
function stopTimer(notif) {
clearTimeout(notif.timer);
}
@@ -106,6 +111,27 @@ const createNotifText = (notif) => {
return text;
};
function checkIntercomPresence() {
isIntercomPresent.value = !!document.querySelector(".intercom-lightweight-app");
}
onMounted(() => {
checkIntercomPresence();
const observer = new MutationObserver(() => {
checkIntercomPresence();
});
observer.observe(document.body, {
childList: true,
subtree: true,
});
onBeforeUnmount(() => {
observer.disconnect();
});
});
function copyToClipboard(notif) {
const text = createNotifText(notif);
@@ -130,6 +156,10 @@ function copyToClipboard(notif) {
bottom: 0.75rem;
}
&.intercom-present {
bottom: 5rem;
}
.vue-notification-wrapper {
width: 100%;
overflow: hidden;

View File

@@ -57,7 +57,7 @@
<div class="table-cell">
<BoxIcon />
<span>{{
$formatProjectType(
formatProjectType(
$getProjectTypeForDisplay(
project.project_types?.[0] ?? "project",
project.loaders,
@@ -111,6 +111,7 @@
<script setup>
import { BoxIcon, SettingsIcon, TransferIcon, XIcon } from "@modrinth/assets";
import { Button, Modal, Checkbox, CopyCode, Avatar } from "@modrinth/ui";
import { formatProjectType } from "@modrinth/utils";
const modalOpen = ref(null);

View File

@@ -118,7 +118,7 @@ import {
ScaleIcon,
DropdownIcon,
} from "@modrinth/assets";
import { formatProjectType } from "~/plugins/shorthands.js";
import { formatProjectType } from "@modrinth/utils";
import { acceptTeamInvite, removeTeamMember } from "~/helpers/teams.js";
const props = defineProps({

View File

@@ -11,8 +11,8 @@
<div class="stacked">
<span class="title">{{ report.project.title }}</span>
<span>{{
$formatProjectType(
$getProjectTypeForUrl(report.project.project_type, report.project.loaders),
formatProjectType(
getProjectTypeForUrl(report.project.project_type, report.project.loaders),
)
}}</span>
</div>
@@ -53,8 +53,8 @@
<div class="stacked">
<span class="title">{{ report.project.title }}</span>
<span>{{
$formatProjectType(
$getProjectTypeForUrl(report.project.project_type, report.project.loaders),
formatProjectType(
getProjectTypeForUrl(report.project.project_type, report.project.loaders),
)
}}</span>
</div>
@@ -105,8 +105,10 @@
<script setup>
import { ReportIcon, UnknownIcon, VersionIcon } from "@modrinth/assets";
import { Avatar, Badge, CopyCode, useRelativeTime } from "@modrinth/ui";
import { formatProjectType } from "@modrinth/utils";
import { renderHighlightedString } from "~/helpers/highlight.js";
import ThreadSummary from "~/components/ui/thread/ThreadSummary.vue";
import { getProjectTypeForUrl } from "~/helpers/projects.js";
const formatRelativeTime = useRelativeTime();

View File

@@ -4,12 +4,14 @@
<span
v-for="category in categoriesFiltered"
:key="category.name"
v-html="category.icon + $formatCategory(category.name)"
v-html="category.icon + formatCategory(category.name)"
/>
</div>
</template>
<script>
import { formatCategory } from "@modrinth/utils";
export default {
props: {
categories: {
@@ -38,6 +40,7 @@ export default {
);
},
},
methods: { formatCategory },
};
</script>

View File

@@ -16,7 +16,7 @@ import {
} from "@modrinth/assets";
import { ButtonStyled, commonMessages, OverflowMenu, ProgressBar } from "@modrinth/ui";
import { defineMessages, useVIntl } from "@vintl/vintl";
import { ref } from "vue";
import { ref, computed } from "vue";
import type { Backup } from "@modrinth/utils";
const flags = useFeatureFlags();
@@ -52,9 +52,10 @@ const failedToCreate = computed(() => props.backup.interrupted);
const preparedDownloadStates = ["ready", "done"];
const inactiveStates = ["failed", "cancelled"];
const hasPreparedDownload = computed(() =>
preparedDownloadStates.includes(props.backup.task?.file?.state ?? ""),
);
const hasPreparedDownload = computed(() => {
const fileState = props.backup.task?.file?.state ?? "";
return preparedDownloadStates.includes(fileState);
});
const creating = computed(() => {
const task = props.backup.task?.create;
@@ -81,6 +82,10 @@ const restoring = computed(() => {
const initiatedPrepare = ref(false);
const preparingFile = computed(() => {
if (hasPreparedDownload.value) {
return false;
}
const task = props.backup.task?.file;
return (
(!task && initiatedPrepare.value) ||

View File

@@ -39,7 +39,7 @@
/>
<XCircleIcon
v-show="
item.status === 'error' ||
item.status.includes('error') ||
item.status === 'cancelled' ||
item.status === 'incorrect-type'
"
@@ -54,9 +54,14 @@
<template v-if="item.status === 'completed'">
<span>Done</span>
</template>
<template v-else-if="item.status === 'error'">
<template v-else-if="item.status === 'error-file-exists'">
<span class="text-red">Failed - File already exists</span>
</template>
<template v-else-if="item.status === 'error-generic'">
<span class="text-red"
>Failed - {{ item.error?.message || "An unexpected error occured." }}</span
>
</template>
<template v-else-if="item.status === 'incorrect-type'">
<span class="text-red">Failed - Incorrect file type</span>
</template>
@@ -104,9 +109,17 @@ import { FSModule } from "~/composables/servers/modules/fs.ts";
interface UploadItem {
file: File;
progress: number;
status: "pending" | "uploading" | "completed" | "error" | "cancelled" | "incorrect-type";
status:
| "pending"
| "uploading"
| "completed"
| "error-file-exists"
| "error-generic"
| "cancelled"
| "incorrect-type";
size: string;
uploader?: any;
error?: Error;
}
interface Props {
@@ -245,8 +258,18 @@ const uploadFile = async (file: File) => {
console.error("Error uploading file:", error);
const index = uploadQueue.value.findIndex((item) => item.file.name === file.name);
if (index !== -1 && uploadQueue.value[index].status !== "cancelled") {
uploadQueue.value[index].status =
error instanceof Error && error.message === badFileTypeMsg ? "incorrect-type" : "error";
const target = uploadQueue.value[index];
if (error instanceof Error) {
if (error.message === badFileTypeMsg) {
target.status = "incorrect-type";
} else if (target.progress === 100 && error.message.includes("401")) {
target.status = "error-file-exists";
} else {
target.status = "error-generic";
target.error = error;
}
}
}
setTimeout(async () => {

View File

@@ -31,6 +31,9 @@ export const DEFAULT_FEATURE_FLAGS = validateValues({
projectBackground: false,
searchBackground: false,
advancedDebugInfo: false,
showProjectPageDownloadModalServersPromo: true,
showProjectPageCreateServersTooltip: true,
showProjectPageQuickServerButton: false,
// advancedRendering: true,
// externalLinksNewTab: true,
// notUsingBlockers: false,

View File

@@ -1,4 +1,4 @@
import { formatBytes } from "~/plugins/shorthands.js";
import { formatBytes } from "@modrinth/utils";
export const fileIsValid = (file, validationOptions) => {
const { maxSize, alertOnInvalid } = validationOptions;

View File

@@ -452,6 +452,16 @@
{{ formatCategory(currentPlatform) }}.
</p>
</AutomaticAccordion>
<ServersPromo
v-if="flags.showProjectPageDownloadModalServersPromo"
:link="`/servers#plan`"
@close="
() => {
flags.showProjectPageDownloadModalServersPromo = false;
saveFeatureFlags();
}
"
/>
</div>
</template>
</NewModal>
@@ -495,6 +505,64 @@
</button>
</ButtonStyled>
</div>
<Tooltip
v-if="canCreateServerFrom && flags.showProjectPageQuickServerButton"
theme="dismissable-prompt"
:triggers="[]"
:shown="flags.showProjectPageCreateServersTooltip"
:auto-hide="false"
placement="bottom-start"
>
<ButtonStyled size="large" circular>
<nuxt-link
v-tooltip="'Create a server'"
:to="`/servers?project=${project.id}#plan`"
@click="
() => {
flags.showProjectPageCreateServersTooltip = false;
saveFeatureFlags();
}
"
>
<ServerPlusIcon aria-hidden="true" />
</nuxt-link>
</ButtonStyled>
<template #popper>
<div class="experimental-styles-within flex max-w-60 flex-col gap-1">
<div class="flex items-center justify-between gap-4">
<h3 class="m-0 flex items-center gap-2 text-base font-bold text-contrast">
Create a server
<TagItem
:style="{
'--_color': 'var(--color-brand)',
'--_bg-color': 'var(--color-brand-highlight)',
}"
>New</TagItem
>
</h3>
<ButtonStyled size="small" circular>
<button
v-tooltip="`Don't show again`"
@click="
() => {
flags.showProjectPageCreateServersTooltip = false;
saveFeatureFlags();
}
"
>
<XIcon aria-hidden="true" />
</button>
</ButtonStyled>
</div>
<p class="m-0 text-wrap text-sm font-medium leading-tight text-secondary">
Modrinth Servers is the easiest way to play with your friends without hassle!
</p>
<p class="m-0 text-wrap text-sm font-bold text-primary">
Starting at $5<span class="text-xs"> / month</span>
</p>
</div>
</template>
</Tooltip>
<ClientOnly>
<ButtonStyled
size="large"
@@ -850,12 +918,14 @@ import {
ReportIcon,
ScaleIcon,
SearchIcon,
ServerPlusIcon,
SettingsIcon,
TagsIcon,
UsersIcon,
VersionIcon,
WrenchIcon,
ModrinthIcon,
XIcon,
} from "@modrinth/assets";
import {
Avatar,
@@ -872,12 +942,22 @@ import {
ProjectSidebarLinks,
ProjectStatusBadge,
ScrollablePanel,
TagItem,
ServersPromo,
useRelativeTime,
} from "@modrinth/ui";
import VersionSummary from "@modrinth/ui/src/components/version/VersionSummary.vue";
import { formatCategory, isRejected, isStaff, isUnderReview, renderString } from "@modrinth/utils";
import {
formatCategory,
formatProjectType,
isRejected,
isStaff,
isUnderReview,
renderString,
} from "@modrinth/utils";
import { navigateTo } from "#app";
import dayjs from "dayjs";
import { Tooltip } from "floating-vue";
import Accordion from "~/components/ui/Accordion.vue";
import AdPlaceholder from "~/components/ui/AdPlaceholder.vue";
import AutomaticAccordion from "~/components/ui/AutomaticAccordion.vue";
@@ -891,6 +971,7 @@ import NavTabs from "~/components/ui/NavTabs.vue";
import ProjectMemberHeader from "~/components/ui/ProjectMemberHeader.vue";
import { userCollectProject } from "~/composables/user.js";
import { reportProject } from "~/utils/report-helpers.ts";
import { saveFeatureFlags } from "~/composables/featureFlags.ts";
const data = useNuxtApp();
const route = useNativeRoute();
@@ -1286,7 +1367,7 @@ featuredVersions.value.sort((a, b) => {
});
const projectTypeDisplay = computed(() =>
data.$formatProjectType(
formatProjectType(
data.$getProjectTypeForDisplay(project.value.project_type, project.value.loaders),
),
);
@@ -1304,6 +1385,10 @@ const description = computed(
} by ${members.value.find((x) => x.is_owner)?.user?.username || "a Creator"} on Modrinth`,
);
const canCreateServerFrom = computed(() => {
return project.value.project_type === "modpack" && project.value.server_side !== "unsupported";
});
if (!route.name.startsWith("type-id-settings")) {
useSeoMeta({
title: () => title.value,
@@ -1672,4 +1757,33 @@ const navLinks = computed(() => {
display: none;
}
}
.servers-popup {
box-shadow:
0 0 12px 1px rgba(0, 175, 92, 0.6),
var(--shadow-floating);
&::before {
width: 0;
height: 0;
border-left: 6px solid transparent;
border-right: 6px solid transparent;
border-bottom: 6px solid var(--color-button-bg);
content: " ";
position: absolute;
top: -7px;
left: 17px;
}
&::after {
width: 0;
height: 0;
border-left: 5px solid transparent;
border-right: 5px solid transparent;
border-bottom: 5px solid var(--color-raised-bg);
content: " ";
position: absolute;
top: -5px;
left: 18px;
}
}
</style>

View File

@@ -104,7 +104,7 @@
<span class="label__title">Client-side</span>
<span class="label__description">
Select based on if the
{{ $formatProjectType(project.project_type).toLowerCase() }} has functionality on the
{{ formatProjectType(project.project_type).toLowerCase() }} has functionality on the
client side. Just because a mod works in Singleplayer doesn't mean it has actual
client-side functionality.
</span>
@@ -128,7 +128,7 @@
<span class="label__title">Server-side</span>
<span class="label__description">
Select based on if the
{{ $formatProjectType(project.project_type).toLowerCase() }} has functionality on the
{{ formatProjectType(project.project_type).toLowerCase() }} has functionality on the
<strong>logical</strong> server. Remember that Singleplayer contains an integrated
server.
</span>
@@ -239,7 +239,7 @@
</template>
<script setup>
import { formatProjectStatus } from "@modrinth/utils";
import { formatProjectStatus, formatProjectType } from "@modrinth/utils";
import { UploadIcon, SaveIcon, TrashIcon, XIcon, IssuesIcon, CheckIcon } from "@modrinth/assets";
import { Multiselect } from "vue-multiselect";
import { ConfirmModal, Avatar } from "@modrinth/ui";

View File

@@ -8,7 +8,7 @@
</div>
<p>
Accurate tagging is important to help people find your
{{ $formatProjectType(project.project_type).toLowerCase() }}. Make sure to select all tags
{{ formatProjectType(project.project_type).toLowerCase() }}. Make sure to select all tags
that apply.
</p>
<p v-if="project.versions.length === 0" class="known-errors">
@@ -18,25 +18,25 @@
<template v-for="header in Object.keys(categoryLists)" :key="`categories-${header}`">
<div class="label">
<h4>
<span class="label__title">{{ $formatCategoryHeader(header) }}</span>
<span class="label__title">{{ formatCategoryHeader(header) }}</span>
</h4>
<span class="label__description">
<template v-if="header === 'categories'">
Select all categories that reflect the themes or function of your
{{ $formatProjectType(project.project_type).toLowerCase() }}.
{{ formatProjectType(project.project_type).toLowerCase() }}.
</template>
<template v-else-if="header === 'features'">
Select all of the features that your
{{ $formatProjectType(project.project_type).toLowerCase() }} makes use of.
{{ formatProjectType(project.project_type).toLowerCase() }} makes use of.
</template>
<template v-else-if="header === 'resolutions'">
Select the resolution(s) of textures in your
{{ $formatProjectType(project.project_type).toLowerCase() }}.
{{ formatProjectType(project.project_type).toLowerCase() }}.
</template>
<template v-else-if="header === 'performance impact'">
Select the realistic performance impact of your
{{ $formatProjectType(project.project_type).toLowerCase() }}. Select multiple if the
{{ $formatProjectType(project.project_type).toLowerCase() }} is configurable to
{{ formatProjectType(project.project_type).toLowerCase() }}. Select multiple if the
{{ formatProjectType(project.project_type).toLowerCase() }} is configurable to
different levels of performance impact.
</template>
</span>
@@ -46,7 +46,7 @@
v-for="category in categoryLists[header]"
:key="`category-${header}-${category.name}`"
:model-value="selectedTags.includes(category)"
:description="$formatCategory(category.name)"
:description="formatCategory(category.name)"
class="category-selector"
@update:model-value="toggleCategory(category)"
>
@@ -57,7 +57,7 @@
class="icon"
v-html="category.icon"
/>
<span aria-hidden="true"> {{ $formatCategory(category.name) }}</span>
<span aria-hidden="true"> {{ formatCategory(category.name) }}</span>
</div>
</Checkbox>
</div>
@@ -80,7 +80,7 @@
:key="`featured-category-${category.name}`"
class="category-selector"
:model-value="featuredTags.includes(category)"
:description="$formatCategory(category.name)"
:description="formatCategory(category.name)"
:disabled="featuredTags.length >= 3 && !featuredTags.includes(category)"
@update:model-value="toggleFeaturedCategory(category)"
>
@@ -91,7 +91,7 @@
class="icon"
v-html="category.icon"
/>
<span aria-hidden="true"> {{ $formatCategory(category.name) }}</span>
<span aria-hidden="true"> {{ formatCategory(category.name) }}</span>
</div>
</Checkbox>
</div>
@@ -114,6 +114,7 @@
<script>
import { StarIcon, SaveIcon } from "@modrinth/assets";
import { formatCategory, formatCategoryHeader, formatProjectType } from "@modrinth/utils";
import Checkbox from "~/components/ui/Checkbox.vue";
export default defineNuxtComponent({
@@ -222,6 +223,9 @@ export default defineNuxtComponent({
},
},
methods: {
formatProjectType,
formatCategoryHeader,
formatCategory,
toggleCategory(category) {
if (this.selectedTags.includes(category)) {
this.selectedTags = this.selectedTags.filter((x) => x !== category);

View File

@@ -144,7 +144,7 @@
<div v-else class="input-group">
<ButtonStyled v-if="primaryFile" color="brand">
<a
v-tooltip="primaryFile.filename + ' (' + $formatBytes(primaryFile.size) + ')'"
v-tooltip="primaryFile.filename + ' (' + formatBytes(primaryFile.size) + ')'"
:href="primaryFile.url"
@click="emit('onDownload')"
>
@@ -320,7 +320,7 @@
<FileIcon aria-hidden="true" />
<span class="filename">
<strong>{{ replaceFile.name }}</strong>
<span class="file-size">({{ $formatBytes(replaceFile.size) }})</span>
<span class="file-size">({{ formatBytes(replaceFile.size) }})</span>
</span>
<FileInput
class="iconified-button raised-button"
@@ -345,7 +345,7 @@
<FileIcon aria-hidden="true" />
<span class="filename">
<strong>{{ file.filename }}</strong>
<span class="file-size">({{ $formatBytes(file.size) }})</span>
<span class="file-size">({{ formatBytes(file.size) }})</span>
<span v-if="primaryFile.hashes.sha1 === file.hashes.sha1" class="file-type">
Primary
</span>
@@ -412,7 +412,7 @@
<FileIcon aria-hidden="true" />
<span class="filename">
<strong>{{ file.name }}</strong>
<span class="file-size">({{ $formatBytes(file.size) }})</span>
<span class="file-size">({{ formatBytes(file.size) }})</span>
</span>
<multiselect
v-if="version.loaders.some((x) => tags.loaderData.dataPackLoaders.includes(x))"
@@ -533,7 +533,7 @@
)
.map((it) => it.name)
"
:custom-label="(value) => $formatCategory(value)"
:custom-label="formatCategory"
:loading="tags.loaders.length === 0"
:multiple="true"
:searchable="true"
@@ -657,6 +657,7 @@ import {
ChevronRightIcon,
} from "@modrinth/assets";
import { Multiselect } from "vue-multiselect";
import { formatBytes, formatCategory } from "@modrinth/utils";
import { acceptFileFromProjectType } from "~/helpers/fileUtils.js";
import { inferVersionInfo } from "~/helpers/infer.js";
import { createDataPackVersion } from "~/helpers/package.js";
@@ -962,6 +963,8 @@ export default defineNuxtComponent({
},
},
methods: {
formatBytes,
formatCategory,
async onImageUpload(file) {
const response = await useImageUpload(file, { context: "version" });

View File

@@ -26,7 +26,7 @@
v-if="notifTypes.length > 1"
v-model="selectedType"
:items="notifTypes"
:format-label="(x) => (x === 'all' ? 'All' : $formatProjectType(x).replace('_', ' ') + 's')"
:format-label="(x) => (x === 'all' ? 'All' : formatProjectType(x).replace('_', ' ') + 's')"
:capitalize="false"
/>
<p v-if="pending">Loading notifications...</p>
@@ -58,6 +58,7 @@
<script setup>
import { Button, Pagination, Chips } from "@modrinth/ui";
import { HistoryIcon, CheckCheckIcon } from "@modrinth/assets";
import { formatProjectType } from "@modrinth/utils";
import {
fetchExtraNotificationData,
groupNotifications,

View File

@@ -239,7 +239,7 @@
<div>
<nuxt-link
tabindex="-1"
:to="`/${$getProjectTypeForUrl(project.project_type, project.loaders)}/${
:to="`/${getProjectTypeForUrl(project.project_type, project.loaders)}/${
project.slug ? project.slug : project.id
}`"
>
@@ -261,7 +261,7 @@
<nuxt-link
class="hover-link wrap-as-needed"
:to="`/${$getProjectTypeForUrl(project.project_type, project.loaders)}/${
:to="`/${getProjectTypeForUrl(project.project_type, project.loaders)}/${
project.slug ? project.slug : project.id
}`"
>
@@ -275,7 +275,7 @@
</div>
<div>
{{ $formatProjectType($getProjectTypeForUrl(project.project_type, project.loaders)) }}
{{ formatProjectType(getProjectTypeForUrl(project.project_type, project.loaders)) }}
</div>
<div>
@@ -285,7 +285,7 @@
<div>
<ButtonStyled circular>
<nuxt-link
:to="`/${$getProjectTypeForUrl(project.project_type, project.loaders)}/${
:to="`/${getProjectTypeForUrl(project.project_type, project.loaders)}/${
project.slug ? project.slug : project.id
}/settings`"
>
@@ -321,9 +321,11 @@ import {
ProjectStatusBadge,
commonMessages,
} from "@modrinth/ui";
import { formatProjectType } from "@modrinth/utils";
import Modal from "~/components/ui/Modal.vue";
import ModalCreation from "~/components/ui/ModalCreation.vue";
import { getProjectTypeForUrl } from "~/helpers/projects.js";
export default defineNuxtComponent({
components: {
@@ -395,6 +397,8 @@ export default defineNuxtComponent({
this.DELETE_PROJECT = 1 << 7;
},
methods: {
getProjectTypeForUrl,
formatProjectType,
updateDescending() {
this.descending = !this.descending;
this.projects = this.updateSort(this.projects, this.sortBy, this.descending);

View File

@@ -76,7 +76,7 @@
</span>
<template v-if="payout.method">
<span>⋅</span>
<span>{{ $formatWallet(payout.method) }} ({{ payout.method_address }})</span>
<span>{{ formatWallet(payout.method) }} ({{ payout.method_address }})</span>
</template>
</div>
</div>
@@ -95,7 +95,7 @@
</template>
<script setup>
import { XIcon, PayPalIcon, UnknownIcon } from "@modrinth/assets";
import { capitalizeString } from "@modrinth/utils";
import { capitalizeString, formatWallet } from "@modrinth/utils";
import { Badge, Breadcrumbs, DropdownSelect } from "@modrinth/ui";
import dayjs from "dayjs";
import TremendousIcon from "~/assets/images/external/tremendous.svg?component";

View File

@@ -139,8 +139,8 @@
<template v-if="knownErrors.length === 0 && amount">
<Checkbox v-if="fees > 0" v-model="agreedFees" description="Consent to fee">
I acknowledge that an estimated
{{ $formatMoney(fees) }} will be deducted from the amount I receive to cover
{{ $formatWallet(selectedMethod.type) }} processing fees.
{{ formatMoney(fees) }} will be deducted from the amount I receive to cover
{{ formatWallet(selectedMethod.type) }} processing fees.
</Checkbox>
<Checkbox v-model="agreedTransfer" description="Confirm transfer">
<template v-if="selectedMethod.type === 'tremendous'">
@@ -149,7 +149,7 @@
</template>
<template v-else>
I confirm that I am initiating a transfer to the following
{{ $formatWallet(selectedMethod.type) }} account: {{ withdrawAccount }}
{{ formatWallet(selectedMethod.type) }} account: {{ withdrawAccount }}
</template>
</Checkbox>
<Checkbox v-model="agreedTerms" class="rewards-checkbox">
@@ -198,6 +198,7 @@ import {
} from "@modrinth/assets";
import { Chips, Checkbox, Breadcrumbs } from "@modrinth/ui";
import { all } from "iso-3166-1";
import { formatMoney, formatWallet } from "@modrinth/utils";
import VenmoIcon from "~/assets/images/external/venmo.svg?component";
const auth = await useAuth();
@@ -360,9 +361,7 @@ async function withdraw() {
text:
selectedMethod.value.type === "tremendous"
? "An email has been sent to your account with further instructions on how to redeem your payout!"
: `Payment has been sent to your ${data.$formatWallet(
selectedMethod.value.type,
)} account!`,
: `Payment has been sent to your ${formatWallet(selectedMethod.value.type)} account!`,
type: "success",
});
} catch (err) {

View File

@@ -32,7 +32,7 @@
</div>
</template>
<script setup>
import { formatNumber } from "~/plugins/shorthands.js";
import { formatNumber } from "@modrinth/utils";
useHead({
title: "Staff overview - Modrinth",

View File

@@ -5,7 +5,7 @@
<Chips
v-model="projectType"
:items="projectTypes"
:format-label="(x) => (x === 'all' ? 'All' : $formatProjectType(x) + 's')"
:format-label="(x) => (x === 'all' ? 'All' : formatProjectType(x) + 's')"
/>
<button v-if="oldestFirst" class="iconified-button push-right" @click="oldestFirst = false">
<SortDescendingIcon />
@@ -56,7 +56,7 @@
<Avatar :src="project.icon_url" size="xs" no-shadow raised />
<span class="stacked">
<span class="title">{{ project.name }}</span>
<span>{{ $formatProjectType(project.inferred_project_type) }}</span>
<span>{{ formatProjectType(project.inferred_project_type) }}</span>
</span>
</nuxt-link>
</div>
@@ -114,7 +114,7 @@ import {
IssuesIcon,
ScaleIcon,
} from "@modrinth/assets";
import { formatProjectType } from "~/plugins/shorthands.js";
import { formatProjectType } from "@modrinth/utils";
import { asEncodedJsonArray, fetchSegmented } from "~/utils/fetch-helpers.ts";
useHead({

View File

@@ -272,7 +272,7 @@
<div class="table-cell">
<BoxIcon />
<span>{{
$formatProjectType(
formatProjectType(
$getProjectTypeForDisplay(project.project_types[0] ?? "project", project.loaders),
)
}}</span>
@@ -313,6 +313,7 @@ import {
} from "@modrinth/assets";
import { Button, Modal, Avatar, CopyCode, Badge, Checkbox, commonMessages } from "@modrinth/ui";
import { formatProjectType } from "@modrinth/utils";
import ModalCreation from "~/components/ui/ModalCreation.vue";
import OrganizationProjectTransferModal from "~/components/ui/OrganizationProjectTransferModal.vue";

View File

@@ -500,6 +500,7 @@
<section
id="plan"
pyro-hash="plan"
class="relative mt-24 flex flex-col bg-[radial-gradient(65%_50%_at_50%_-10%,var(--color-brand-highlight)_0%,var(--color-accent-contrast)_100%)] px-3 pt-24 md:mt-48 md:pt-48"
>
<div class="faded-brand-line absolute left-0 top-0 h-[1px] w-full"></div>
@@ -603,7 +604,9 @@
<RightArrowIcon class="shrink-0" />
</button>
</ButtonStyled>
<p class="m-0 text-sm">Starting at $3/GB RAM</p>
<p v-if="lowestPrice" class="m-0 text-sm">
Starting at {{ formatPrice(locale, lowestPrice, selectedCurrency, true) }} / month
</p>
</div>
</div>
</div>
@@ -622,20 +625,34 @@ import {
VersionIcon,
ServerIcon,
} from "@modrinth/assets";
import { computed } from "vue";
import { monthsInInterval } from "@modrinth/ui/src/utils/billing.ts";
import { formatPrice } from "@modrinth/utils";
import { useVIntl } from "@vintl/vintl";
import { products } from "~/generated/state.json";
import { useServersFetch } from "~/composables/servers/servers-fetch.ts";
import LoaderIcon from "~/components/ui/servers/icons/LoaderIcon.vue";
import ServerPlanSelector from "~/components/ui/servers/marketing/ServerPlanSelector.vue";
import OptionGroup from "~/components/ui/OptionGroup.vue";
const { locale } = useVIntl();
const billingPeriods = ref(["monthly", "quarterly"]);
const billingPeriod = ref(billingPeriods.value.includes("quarterly") ? "quarterly" : "monthly");
const pyroProducts = products.filter((p) => p.metadata.type === "pyro");
const pyroProducts = products
.filter((p) => p.metadata.type === "pyro")
.sort((a, b) => a.metadata.ram - b.metadata.ram);
const pyroPlanProducts = pyroProducts.filter(
(p) => p.metadata.ram === 4096 || p.metadata.ram === 6144 || p.metadata.ram === 8192,
);
pyroPlanProducts.sort((a, b) => a.metadata.ram - b.metadata.ram);
const lowestPrice = computed(() => {
const amount = pyroProducts[0]?.prices?.find(
(price) => price.currency_code === selectedCurrency.value,
)?.prices?.intervals?.[billingPeriod.value];
return amount ? amount / monthsInInterval[billingPeriod.value] : undefined;
});
const title = "Modrinth Servers";
const description =
@@ -799,6 +816,8 @@ async function fetchPaymentData() {
}
}
const selectedProjectId = ref();
const route = useRoute();
const isAtCapacity = computed(
() => isSmallAtCapacity.value && isMediumAtCapacity.value && isLargeAtCapacity.value,
@@ -817,7 +836,12 @@ const scrollToFaq = () => {
}
};
onMounted(scrollToFaq);
onMounted(() => {
scrollToFaq();
if (route.query?.project) {
selectedProjectId.value = route.query?.project;
}
});
watch(() => route.hash, scrollToFaq);
@@ -876,9 +900,9 @@ const selectProduct = async (product) => {
await nextTick();
if (product === "custom") {
purchaseModal.value?.show(billingPeriod.value);
purchaseModal.value?.show(billingPeriod.value, undefined, selectedProjectId.value);
} else {
purchaseModal.value?.show(billingPeriod.value, selectedProduct.value);
purchaseModal.value?.show(billingPeriod.value, selectedProduct.value, selectedProjectId.value);
}
};

View File

@@ -200,9 +200,9 @@
<script setup lang="ts">
import { CodeIcon, RadioButtonCheckedIcon, RadioButtonIcon } from "@modrinth/assets";
import { Button, ThemeSelector } from "@modrinth/ui";
import { formatProjectType } from "@modrinth/utils";
import MessageBanner from "~/components/ui/MessageBanner.vue";
import type { DisplayLocation } from "~/plugins/cosmetics";
import { formatProjectType } from "~/plugins/shorthands.js";
import { isDarkTheme, type Theme } from "~/plugins/theme/index.ts";
useHead({

View File

@@ -10,6 +10,10 @@ export default defineNuxtPlugin((nuxtApp) => {
instantMove: true,
distance: 8,
},
"dismissable-prompt": {
$extend: "dropdown",
placement: "bottom-start",
},
},
});
});

View File

@@ -13,11 +13,6 @@ export default defineNuxtPlugin((nuxtApp) => {
return cosmeticsStore.externalLinksNewTab ? "_blank" : "";
});
nuxtApp.provide("formatBytes", formatBytes);
nuxtApp.provide("formatWallet", formatWallet);
nuxtApp.provide("formatProjectType", formatProjectType);
nuxtApp.provide("formatCategory", formatCategory);
nuxtApp.provide("formatCategoryHeader", formatCategoryHeader);
/*
Only use on the complete list of versions for a project, partial lists will generate
@@ -156,89 +151,10 @@ export const formatMoney = (number, abbreviate = false) => {
}
};
export const formatBytes = (bytes, decimals = 2) => {
if (bytes === 0) {
return "0 Bytes";
}
const k = 1024;
const dm = decimals < 0 ? 0 : decimals;
const sizes = ["Bytes", "KiB", "MiB", "GiB"];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + " " + sizes[i];
};
export const capitalizeString = (name) => {
return name ? name.charAt(0).toUpperCase() + name.slice(1) : name;
};
export const formatWallet = (name) => {
if (name === "paypal") {
return "PayPal";
}
return capitalizeString(name);
};
export const formatProjectType = (name) => {
if (name === "resourcepack") {
return "Resource Pack";
} else if (name === "datapack") {
return "Data Pack";
}
return capitalizeString(name);
};
export const formatCategory = (name) => {
if (name === "modloader") {
return "Risugami's ModLoader";
} else if (name === "bungeecord") {
return "BungeeCord";
} else if (name === "liteloader") {
return "LiteLoader";
} else if (name === "neoforge") {
return "NeoForge";
} else if (name === "game-mechanics") {
return "Game Mechanics";
} else if (name === "worldgen") {
return "World Generation";
} else if (name === "core-shaders") {
return "Core Shaders";
} else if (name === "gui") {
return "GUI";
} else if (name === "8x-") {
return "8x or lower";
} else if (name === "512x+") {
return "512x or higher";
} else if (name === "kitchen-sink") {
return "Kitchen Sink";
} else if (name === "path-tracing") {
return "Path Tracing";
} else if (name === "pbr") {
return "PBR";
} else if (name === "datapack") {
return "Data Pack";
} else if (name === "colored-lighting") {
return "Colored Lighting";
} else if (name === "optifine") {
return "OptiFine";
} else if (name === "mrpack") {
return "Modpack";
} else if (name === "minecraft") {
return "Resource Pack";
} else if (name === "vanilla") {
return "Vanilla Shader";
}
return capitalizeString(name);
};
export const formatCategoryHeader = (name) => {
return capitalizeString(name);
};
export const formatVersions = (tag, versionArray) => {
const allVersions = tag.value.gameVersions.slice().reverse();
const allReleases = allVersions.filter((x) => x.version_type === "release");

File diff suppressed because it is too large

View File

@@ -1,84 +0,0 @@
<!doctype html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>Modrinth App Ad</title>
<script type="module" src="//js.rev.iq" data-domain="modrinth.com"></script>
<style>
* {
margin: 0;
padding: 0;
overflow: hidden;
cursor: pointer;
}
.ads-container {
width: 100vw;
height: 100vh;
overflow: hidden;
position: relative;
}
#plus-link {
width: 100vw;
height: 100vh;
position: absolute;
top: 0;
left: 0;
z-index: 0;
}
#modrinth-rail-1 {
border-radius: 1rem;
position: absolute;
left: 0;
top: 0;
z-index: 2;
}
</style>
</head>
<body>
<div class="ads-container">
<div id="plus-link"></div>
<div id="modrinth-rail-1" data-ad="app-rail" />
</div>
<script>
// window.addEventListener(
// "message",
// (event) => {
// if (event.data.modrinthAdClick && window.__TAURI_INTERNALS__) {
// window.__TAURI_INTERNALS__.invoke("plugin:ads|record_ads_click", {});
// }
//
// if (event.data.modrinthOpenUrl && window.__TAURI_INTERNALS__) {
// window.__TAURI_INTERNALS__.invoke("plugin:ads|open_link", {
// path: event.data.modrinthOpenUrl,
// origin: event.origin,
// });
// }
// },
// false,
// );
window.addEventListener("mousewheel", (event) => {
if (window.__TAURI_INTERNALS__) {
window.__TAURI_INTERNALS__.invoke("plugin:ads|scroll_ads_window", {
scroll: event.deltaY,
});
}
});
document.addEventListener("contextmenu", (event) => event.preventDefault());
const plusLink = document.getElementById("plus-link");
plusLink.addEventListener("click", function () {
window.__TAURI_INTERNALS__.invoke("plugin:ads|record_ads_click", {});
window.__TAURI_INTERNALS__.invoke("plugin:ads|open_link", {
path: "https://modrinth.com/plus",
origin: "https://modrinth.com",
});
});
</script>
</body>
</html>

View File

@@ -28,15 +28,19 @@ CLOUDFLARE_INTEGRATION=false
STORAGE_BACKEND=local
MOCK_FILE_PATH=/tmp/modrinth
BACKBLAZE_KEY_ID=none
BACKBLAZE_KEY=none
BACKBLAZE_BUCKET_ID=none
S3_PUBLIC_BUCKET_NAME=none
S3_PUBLIC_USES_PATH_STYLE_BUCKET=false
S3_PUBLIC_REGION=none
S3_PUBLIC_URL=none
S3_PUBLIC_ACCESS_TOKEN=none
S3_PUBLIC_SECRET=none
S3_ACCESS_TOKEN=none
S3_SECRET=none
S3_URL=none
S3_REGION=none
S3_BUCKET_NAME=none
S3_PRIVATE_BUCKET_NAME=none
S3_PRIVATE_USES_PATH_STYLE_BUCKET=false
S3_PRIVATE_REGION=none
S3_PRIVATE_URL=none
S3_PRIVATE_ACCESS_TOKEN=none
S3_PRIVATE_SECRET=none
# 1 hour
LOCAL_INDEX_INTERVAL=3600

View File

@@ -0,0 +1,16 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO shared_instance_users (user_id, shared_instance_id, permissions)\n VALUES ($1, $2, $3)\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8",
"Int8",
"Int8"
]
},
"nullable": []
},
"hash": "09ebec1a568edf1959f20b33d8ba2b8edb55d93ada8f2243448865163f555d8d"
}

View File

@@ -0,0 +1,46 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT id, title, owner_id, public, current_version_id\n FROM shared_instances\n WHERE id = $1\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Int8"
},
{
"ordinal": 1,
"name": "title",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "owner_id",
"type_info": "Int8"
},
{
"ordinal": 3,
"name": "public",
"type_info": "Bool"
},
{
"ordinal": 4,
"name": "current_version_id",
"type_info": "Int8"
}
],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": [
false,
false,
false,
false,
true
]
},
"hash": "1ebe19b7b4f10039065967a0b1ca4bb38acc54e4ea5de020fffef7457000fa6e"
}

View File

@@ -0,0 +1,46 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT id, shared_instance_id, size, sha512, created\n FROM shared_instance_versions\n WHERE shared_instance_id = $1\n ORDER BY created DESC\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Int8"
},
{
"ordinal": 1,
"name": "shared_instance_id",
"type_info": "Int8"
},
{
"ordinal": 2,
"name": "size",
"type_info": "Int8"
},
{
"ordinal": 3,
"name": "sha512",
"type_info": "Bytea"
},
{
"ordinal": 4,
"name": "created",
"type_info": "Timestamptz"
}
],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": [
false,
false,
false,
false,
false
]
},
"hash": "265c4d6f33714c8a5cf3137c429e2b57e917e9507942d65f40c1b733209cabf0"
}

View File

@@ -0,0 +1,14 @@
{
"db_name": "PostgreSQL",
"query": "\n DELETE FROM shared_instance_versions\n WHERE id = $1\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": []
},
"hash": "47130ef29ce5914528e5424fe516a9158a3ea08f8720f6df5b4902cd8094d3bb"
}

View File

@@ -0,0 +1,15 @@
{
"db_name": "PostgreSQL",
"query": "UPDATE shared_instances SET current_version_id = $1 WHERE id = $2",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8",
"Int8"
]
},
"nullable": []
},
"hash": "47ec9f179f1c52213bd32b37621ab13ae43d180b8c86cb2a6fab0253dd4eba55"
}

View File

@@ -0,0 +1,15 @@
{
"db_name": "PostgreSQL",
"query": "\n UPDATE shared_instances\n SET public = $1\n WHERE id = $2\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Bool",
"Int8"
]
},
"nullable": []
},
"hash": "6b166d129b0ee028898620054a58fa4c3641eb2221e522bf50abad4f5e977599"
}

View File

@@ -0,0 +1,17 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO shared_instances (id, title, owner_id, current_version_id)\n VALUES ($1, $2, $3, $4)\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8",
"Varchar",
"Int8",
"Int8"
]
},
"nullable": []
},
"hash": "6f72c853e139f23322fe6f1f02e4e07e5ae80b5dfca6dc041a03c0c7a30a5cf1"
}

View File

@@ -0,0 +1,15 @@
{
"db_name": "PostgreSQL",
"query": "\n UPDATE shared_instances\n SET owner_id = $1\n WHERE owner_id = $2\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8",
"Int8"
]
},
"nullable": []
},
"hash": "72ae0e8debd06067894a2f7bea279446dd964da4efa49c5464cebde57860f741"
}

View File

@@ -0,0 +1,22 @@
{
"db_name": "PostgreSQL",
"query": "SELECT EXISTS(SELECT 1 FROM shared_instance_versions WHERE id=$1)",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "exists",
"type_info": "Bool"
}
],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": [
null
]
},
"hash": "7c445073f61e30723416a9690aa9d227d95f2a8f2eb9852833e14c723903988b"
}

View File

@@ -0,0 +1,46 @@
{
"db_name": "PostgreSQL",
"query": "\n -- See https://github.com/launchbadge/sqlx/issues/1266 for why we need all the \"as\"\n SELECT\n id as \"id!\",\n title as \"title!\",\n public as \"public!\",\n owner_id as \"owner_id!\",\n current_version_id\n FROM shared_instances\n WHERE owner_id = $1\n UNION\n SELECT\n id as \"id!\",\n title as \"title!\",\n public as \"public!\",\n owner_id as \"owner_id!\",\n current_version_id\n FROM shared_instances\n JOIN shared_instance_users ON id = shared_instance_id\n WHERE user_id = $1\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id!",
"type_info": "Int8"
},
{
"ordinal": 1,
"name": "title!",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "public!",
"type_info": "Bool"
},
{
"ordinal": 3,
"name": "owner_id!",
"type_info": "Int8"
},
{
"ordinal": 4,
"name": "current_version_id",
"type_info": "Int8"
}
],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": [
null,
null,
null,
null,
null
]
},
"hash": "9c6e18cb19251e54b3b96446ab88d84842152b82c9a0032d1db587d7099b8550"
}

View File

@@ -0,0 +1,15 @@
{
"db_name": "PostgreSQL",
"query": "\n UPDATE shared_instances\n SET title = $1\n WHERE id = $2\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Varchar",
"Int8"
]
},
"nullable": []
},
"hash": "9ccaf8ea52b1b6f0880d34cdb4a9405e28c265bef6121b457c4f39cacf00683f"
}

View File

@@ -0,0 +1,34 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT shared_instance_id, user_id, permissions\n FROM shared_instance_users\n WHERE shared_instance_id = ANY($1)\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "shared_instance_id",
"type_info": "Int8"
},
{
"ordinal": 1,
"name": "user_id",
"type_info": "Int8"
},
{
"ordinal": 2,
"name": "permissions",
"type_info": "Int8"
}
],
"parameters": {
"Left": [
"Int8Array"
]
},
"nullable": [
false,
false,
false
]
},
"hash": "aec58041cf5e5e68501652336581b8c709645ef29f3b5fb6e8e07fc212b36798"
}

View File

@@ -0,0 +1,46 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT id, shared_instance_id, size, sha512, created\n FROM shared_instance_versions\n WHERE id = $1\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Int8"
},
{
"ordinal": 1,
"name": "shared_instance_id",
"type_info": "Int8"
},
{
"ordinal": 2,
"name": "size",
"type_info": "Int8"
},
{
"ordinal": 3,
"name": "sha512",
"type_info": "Bytea"
},
{
"ordinal": 4,
"name": "created",
"type_info": "Timestamptz"
}
],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": [
false,
false,
false,
false,
false
]
},
"hash": "b93253bbc35b24974d13bc8ee0447be2a18275f33f8991d910f693fbcc1ff731"
}

View File

@@ -0,0 +1,23 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT permissions\n FROM shared_instance_users\n WHERE shared_instance_id = $1 AND user_id = $2\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "permissions",
"type_info": "Int8"
}
],
"parameters": {
"Left": [
"Int8",
"Int8"
]
},
"nullable": [
false
]
},
"hash": "c3869a595693757ccf81085d0c8eb2231578aff18c93d02ead97c3c07f0b27ea"
}

View File

@@ -0,0 +1,14 @@
{
"db_name": "PostgreSQL",
"query": "\n DELETE FROM shared_instances\n WHERE id = $1\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": []
},
"hash": "cef730c02bb67b0536d35e5aaca0bd34c3893e8b55bbd126a988137ec7bf1ff9"
}

View File

@@ -0,0 +1,22 @@
{
"db_name": "PostgreSQL",
"query": "SELECT EXISTS(SELECT 1 FROM shared_instances WHERE id=$1)",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "exists",
"type_info": "Bool"
}
],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": [
null
]
},
"hash": "d8558a8039ade3b383db4f0e095e6826f46c27ab3a21520e9e169fd1491521c4"
}

View File

@@ -0,0 +1,18 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO shared_instance_versions (id, shared_instance_id, size, sha512, created)\n VALUES ($1, $2, $3, $4, $5)\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8",
"Int8",
"Int8",
"Bytea",
"Timestamptz"
]
},
"nullable": []
},
"hash": "d8a1d710f86b3df4d99c2d2ec26ec405531e4270be85087122245991ec88473e"
}

View File

@@ -0,0 +1,14 @@
{
"db_name": "PostgreSQL",
"query": "\n UPDATE shared_instances\n SET current_version_id = (\n SELECT id FROM shared_instance_versions\n WHERE shared_instance_id = $1\n ORDER BY created DESC\n LIMIT 1\n )\n WHERE id = $1\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Int8"
]
},
"nullable": []
},
"hash": "f6388b5026e25191840d1a157a9ed48aaedab5db381f4efc389b852d9020a0e6"
}


@@ -0,0 +1,43 @@
CREATE TABLE shared_instances (
id BIGINT PRIMARY KEY,
title VARCHAR(255) NOT NULL,
owner_id BIGINT NOT NULL REFERENCES users,
current_version_id BIGINT NULL,
public BOOLEAN NOT NULL DEFAULT FALSE
);
CREATE INDEX shared_instances_owner_id ON shared_instances(owner_id);
CREATE TABLE shared_instance_users (
user_id BIGINT NOT NULL REFERENCES users ON DELETE CASCADE,
shared_instance_id BIGINT NOT NULL REFERENCES shared_instances ON DELETE CASCADE,
permissions BIGINT NOT NULL DEFAULT 0,
PRIMARY KEY (user_id, shared_instance_id)
);
CREATE TABLE shared_instance_invited_users (
id BIGINT PRIMARY KEY,
shared_instance_id BIGINT NOT NULL REFERENCES shared_instances ON DELETE CASCADE,
invited_user_id BIGINT NULL REFERENCES users ON DELETE CASCADE
);
CREATE INDEX shared_instance_invited_users_shared_instance_id ON shared_instance_invited_users(shared_instance_id);
CREATE INDEX shared_instance_invited_users_invited_user_id ON shared_instance_invited_users(invited_user_id);
CREATE TABLE shared_instance_invite_links (
id BIGINT PRIMARY KEY,
shared_instance_id BIGINT NOT NULL REFERENCES shared_instances ON DELETE CASCADE,
expiration timestamptz NULL,
remaining_uses BIGINT CHECK ( remaining_uses >= 0 ) NULL
);
CREATE INDEX shared_instance_invite_links_shared_instance_id ON shared_instance_invite_links(shared_instance_id);
CREATE TABLE shared_instance_versions (
id BIGINT PRIMARY KEY,
shared_instance_id BIGINT NOT NULL REFERENCES shared_instances ON DELETE CASCADE,
size BIGINT NOT NULL,
sha512 bytea NOT NULL,
created timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP
);
ALTER TABLE shared_instances
ADD FOREIGN KEY (current_version_id) REFERENCES shared_instance_versions(id) ON DELETE SET NULL;
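A note on ordering: current_version_id is created as a plain nullable column and only gains its foreign key at the very end, because shared_instance_versions itself references shared_instances; instances are therefore created without a version and the pointer is back-filled later. A minimal sketch of that write path with plain sqlx, assuming a users row already exists; the literal ids and sizes are placeholders, not values from this changeset:

use sqlx::PgPool;

// Sketch of the insert order implied by the schema above.
async fn create_instance_with_first_version(
    pool: &PgPool,
) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;

    // 1. The instance may exist without a version (current_version_id is NULL).
    sqlx::query(
        "INSERT INTO shared_instances (id, title, owner_id) VALUES ($1, $2, $3)",
    )
    .bind(1i64)
    .bind("Example instance")
    .bind(1i64) // assumes this user id exists
    .execute(&mut *tx)
    .await?;

    // 2. A version row must exist before the instance can point at it.
    sqlx::query(
        "INSERT INTO shared_instance_versions (id, shared_instance_id, size, sha512) \
         VALUES ($1, $2, $3, $4)",
    )
    .bind(2i64)
    .bind(1i64)
    .bind(1024i64)
    .bind(vec![0u8; 64])
    .execute(&mut *tx)
    .await?;

    // 3. Back-fill the nullable foreign key added by the ALTER TABLE above.
    sqlx::query("UPDATE shared_instances SET current_version_id = $1 WHERE id = $2")
        .bind(2i64)
        .bind(1i64)
        .execute(&mut *tx)
        .await?;

    tx.commit().await
}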


@@ -9,7 +9,7 @@ use ariadne::ids::DecodingError;
#[error("{}", .error_type)]
pub struct OAuthError {
#[source]
pub error_type: OAuthErrorType,
pub error_type: Box<OAuthErrorType>,
pub state: Option<String>,
pub valid_redirect_uri: Option<ValidatedRedirectUri>,
@@ -32,7 +32,7 @@ impl OAuthError {
/// See: IETF RFC 6749 4.1.2.1 (https://datatracker.ietf.org/doc/html/rfc6749#section-4.1.2.1)
pub fn error(error_type: impl Into<OAuthErrorType>) -> Self {
Self {
error_type: error_type.into(),
error_type: Box::new(error_type.into()),
valid_redirect_uri: None,
state: None,
}
@@ -48,7 +48,7 @@ impl OAuthError {
valid_redirect_uri: &ValidatedRedirectUri,
) -> Self {
Self {
error_type: err.into(),
error_type: Box::new(err.into()),
state: state.clone(),
valid_redirect_uri: Some(valid_redirect_uri.clone()),
}
@@ -57,7 +57,7 @@ impl OAuthError {
impl actix_web::ResponseError for OAuthError {
fn status_code(&self) -> StatusCode {
match self.error_type {
match *self.error_type {
OAuthErrorType::AuthenticationError(_)
| OAuthErrorType::FailedScopeParse(_)
| OAuthErrorType::ScopesTooBroad


@@ -101,7 +101,7 @@ mod tests {
);
assert!(validated.is_err_and(|e| matches!(
e.error_type,
*e.error_type,
OAuthErrorType::RedirectUriNotConfigured(_)
)));
}


@@ -10,6 +10,40 @@ use actix_web::HttpRequest;
use actix_web::http::header::{AUTHORIZATION, HeaderValue};
use chrono::Utc;
pub async fn get_maybe_user_from_headers<'a, E>(
req: &HttpRequest,
executor: E,
redis: &RedisPool,
session_queue: &AuthQueue,
required_scopes: Scopes,
) -> Result<Option<(Scopes, User)>, AuthenticationError>
where
E: sqlx::Executor<'a, Database = sqlx::Postgres> + Copy,
{
if !req.headers().contains_key(AUTHORIZATION) {
return Ok(None);
}
// Fetch DB user record and minos user from headers
let Some((scopes, db_user)) = get_user_record_from_bearer_token(
req,
None,
executor,
redis,
session_queue,
)
.await?
else {
return Ok(None);
};
if !scopes.contains(required_scopes) {
return Ok(None);
}
Ok(Some((scopes, User::from_full(db_user))))
}
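A hedged sketch of how a handler might consume the new helper, written as if it sat alongside it; the route itself is hypothetical and only illustrates that missing or under-scoped credentials surface as Ok(None) rather than as an authentication error:

use actix_web::web::Data;
use actix_web::{HttpRequest, HttpResponse};
use crate::database::redis::RedisPool;
use crate::models::pats::Scopes;
use crate::queue::session::AuthQueue;
use crate::routes::ApiError;

// Hypothetical route, not part of this changeset.
pub async fn example_optional_auth_route(
    req: HttpRequest,
    pool: Data<sqlx::PgPool>,
    redis: Data<RedisPool>,
    session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
    let maybe_user = get_maybe_user_from_headers(
        &req,
        &**pool,
        &redis,
        &session_queue,
        Scopes::SHARED_INSTANCE_READ,
    )
    .await?;

    match maybe_user {
        // Authenticated with a token that carries the required scope.
        Some((_scopes, user)) => Ok(HttpResponse::Ok().json(user)),
        // Anonymous (or insufficiently scoped) callers still get a response.
        None => Ok(HttpResponse::Ok().json(serde_json::json!({ "user": null }))),
    }
}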
pub async fn get_user_from_headers<'a, E>(
req: &HttpRequest,
executor: E,

View File

@@ -3,8 +3,9 @@ use crate::models::ids::{
ChargeId, CollectionId, FileId, ImageId, NotificationId,
OAuthAccessTokenId, OAuthClientAuthorizationId, OAuthClientId,
OAuthRedirectUriId, OrganizationId, PatId, PayoutId, ProductId,
ProductPriceId, ProjectId, ReportId, SessionId, TeamId, TeamMemberId,
ThreadId, ThreadMessageId, UserSubscriptionId, VersionId,
ProductPriceId, ProjectId, ReportId, SessionId, SharedInstanceId,
SharedInstanceVersionId, TeamId, TeamMemberId, ThreadId, ThreadMessageId,
UserSubscriptionId, VersionId,
};
use ariadne::ids::base62_impl::to_base62;
use ariadne::ids::{UserId, random_base62_rng, random_base62_rng_range};
@@ -88,39 +89,50 @@ macro_rules! generate_bulk_ids {
};
}
macro_rules! impl_db_id_interface {
($id_struct:ident, $db_id_struct:ident, $(, generator: $generator_function:ident @ $db_table:expr, $(bulk_generator: $bulk_generator_function:ident,)?)?) => {
#[derive(Copy, Clone, Debug, Type, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[sqlx(transparent)]
pub struct $db_id_struct(pub i64);
impl From<$id_struct> for $db_id_struct {
fn from(id: $id_struct) -> Self {
Self(id.0 as i64)
}
}
impl From<$db_id_struct> for $id_struct {
fn from(id: $db_id_struct) -> Self {
Self(id.0 as u64)
}
}
$(
generate_ids!(
$generator_function,
$db_id_struct,
"SELECT EXISTS(SELECT 1 FROM " + $db_table + " WHERE id=$1)"
);
$(
generate_bulk_ids!(
$bulk_generator_function,
$db_id_struct,
"SELECT EXISTS(SELECT 1 FROM " + $db_table + " WHERE id = ANY($1))"
);
)?
)?
};
}
macro_rules! db_id_interface {
($id_struct:ident $(, generator: $generator_function:ident @ $db_table:expr, $(bulk_generator: $bulk_generator_function:ident,)?)?) => {
paste! {
#[derive(Copy, Clone, Debug, Type, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[sqlx(transparent)]
pub struct [< DB $id_struct >](pub i64);
impl From<$id_struct> for [< DB $id_struct >] {
fn from(id: $id_struct) -> Self {
Self(id.0 as i64)
}
}
impl From<[< DB $id_struct >]> for $id_struct {
fn from(id: [< DB $id_struct >]) -> Self {
Self(id.0 as u64)
}
}
$(
generate_ids!(
$generator_function,
[< DB $id_struct >],
"SELECT EXISTS(SELECT 1 FROM " + $db_table + " WHERE id=$1)"
);
$(
generate_bulk_ids!(
$bulk_generator_function,
[< DB $id_struct >],
"SELECT EXISTS(SELECT 1 FROM " + $db_table + " WHERE id = ANY($1))"
);
)?
)?
impl_db_id_interface!(
$id_struct,
[< DB $id_struct >],
$(, generator: $generator_function @ $db_table, $(bulk_generator: $bulk_generator_function,)?)?
);
}
};
}
@@ -212,6 +224,14 @@ db_id_interface!(
SessionId,
generator: generate_session_id @ "sessions",
);
db_id_interface!(
SharedInstanceId,
generator: generate_shared_instance_id @ "shared_instances",
);
db_id_interface!(
SharedInstanceVersionId,
generator: generate_shared_instance_version_id @ "shared_instance_versions",
);
db_id_interface!(
TeamId,
generator: generate_team_id @ "teams",


@@ -20,6 +20,7 @@ pub mod product_item;
pub mod project_item;
pub mod report_item;
pub mod session_item;
pub mod shared_instance_item;
pub mod team_item;
pub mod thread_item;
pub mod user_item;


@@ -0,0 +1,335 @@
use crate::database::models::{
DBSharedInstanceId, DBSharedInstanceVersionId, DBUserId,
};
use crate::database::redis::RedisPool;
use crate::models::shared_instances::SharedInstanceUserPermissions;
use chrono::{DateTime, Utc};
use dashmap::DashMap;
use futures_util::TryStreamExt;
use serde::{Deserialize, Serialize};
//region shared_instances
pub struct DBSharedInstance {
pub id: DBSharedInstanceId,
pub title: String,
pub owner_id: DBUserId,
pub public: bool,
pub current_version_id: Option<DBSharedInstanceVersionId>,
}
struct SharedInstanceQueryResult {
id: i64,
title: String,
owner_id: i64,
public: bool,
current_version_id: Option<i64>,
}
impl From<SharedInstanceQueryResult> for DBSharedInstance {
fn from(val: SharedInstanceQueryResult) -> Self {
DBSharedInstance {
id: DBSharedInstanceId(val.id),
title: val.title,
owner_id: DBUserId(val.owner_id),
public: val.public,
current_version_id: val
.current_version_id
.map(DBSharedInstanceVersionId),
}
}
}
impl DBSharedInstance {
pub async fn insert(
&self,
transaction: &mut sqlx::Transaction<'_, sqlx::Postgres>,
) -> Result<(), sqlx::Error> {
sqlx::query!(
"
INSERT INTO shared_instances (id, title, owner_id, current_version_id)
VALUES ($1, $2, $3, $4)
",
self.id as DBSharedInstanceId,
self.title,
self.owner_id as DBUserId,
self.current_version_id.map(|x| x.0),
)
.execute(&mut **transaction)
.await?;
Ok(())
}
pub async fn get(
id: DBSharedInstanceId,
exec: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
) -> Result<Option<Self>, sqlx::Error> {
let result = sqlx::query_as!(
SharedInstanceQueryResult,
"
SELECT id, title, owner_id, public, current_version_id
FROM shared_instances
WHERE id = $1
",
id.0,
)
.fetch_optional(exec)
.await?;
Ok(result.map(Into::into))
}
pub async fn list_for_user(
user: DBUserId,
exec: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
) -> Result<Vec<Self>, sqlx::Error> {
let results = sqlx::query_as!(
SharedInstanceQueryResult,
r#"
-- See https://github.com/launchbadge/sqlx/issues/1266 for why we need all the "as"
SELECT
id as "id!",
title as "title!",
public as "public!",
owner_id as "owner_id!",
current_version_id
FROM shared_instances
WHERE owner_id = $1
UNION
SELECT
id as "id!",
title as "title!",
public as "public!",
owner_id as "owner_id!",
current_version_id
FROM shared_instances
JOIN shared_instance_users ON id = shared_instance_id
WHERE user_id = $1
"#,
user.0,
)
.fetch_all(exec)
.await?;
Ok(results.into_iter().map(Into::into).collect())
}
}
//endregion
//region shared_instance_users
const USERS_NAMESPACE: &str = "shared_instance_users";
#[derive(Deserialize, Serialize, Clone, Debug)]
pub struct DBSharedInstanceUser {
pub user_id: DBUserId,
pub shared_instance_id: DBSharedInstanceId,
pub permissions: SharedInstanceUserPermissions,
}
impl DBSharedInstanceUser {
pub async fn insert(
&self,
transaction: &mut sqlx::Transaction<'_, sqlx::Postgres>,
) -> Result<(), sqlx::Error> {
sqlx::query!(
"
INSERT INTO shared_instance_users (user_id, shared_instance_id, permissions)
VALUES ($1, $2, $3)
",
self.user_id as DBUserId,
self.shared_instance_id as DBSharedInstanceId,
self.permissions.bits() as i64,
)
.execute(&mut **transaction)
.await?;
Ok(())
}
pub async fn get_user_permissions(
instance_id: DBSharedInstanceId,
user_id: DBUserId,
exec: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
) -> Result<Option<SharedInstanceUserPermissions>, super::DatabaseError>
{
let permissions = sqlx::query!(
"
SELECT permissions
FROM shared_instance_users
WHERE shared_instance_id = $1 AND user_id = $2
",
instance_id as DBSharedInstanceId,
user_id as DBUserId,
)
.fetch_optional(exec)
.await?
.map(|x| {
SharedInstanceUserPermissions::from_bits(x.permissions as u64)
.unwrap_or(SharedInstanceUserPermissions::empty())
});
Ok(permissions)
}
pub async fn get_from_instance(
instance_id: DBSharedInstanceId,
exec: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
redis: &RedisPool,
) -> Result<Vec<DBSharedInstanceUser>, super::DatabaseError> {
Self::get_from_instance_many(&[instance_id], exec, redis).await
}
pub async fn get_from_instance_many(
instance_ids: &[DBSharedInstanceId],
exec: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
redis: &RedisPool,
) -> Result<Vec<DBSharedInstanceUser>, super::DatabaseError> {
if instance_ids.is_empty() {
return Ok(vec![]);
}
let users = redis
.get_cached_keys(
USERS_NAMESPACE,
&instance_ids.iter().map(|id| id.0).collect::<Vec<_>>(),
async |user_ids| {
let users = sqlx::query!(
"
SELECT shared_instance_id, user_id, permissions
FROM shared_instance_users
WHERE shared_instance_id = ANY($1)
",
&user_ids
)
.fetch(exec)
.try_fold(DashMap::new(), |acc: DashMap<_, Vec<_>>, m| {
acc.entry(m.shared_instance_id).or_default().push(
DBSharedInstanceUser {
user_id: DBUserId(m.user_id),
shared_instance_id: DBSharedInstanceId(
m.shared_instance_id,
),
permissions:
SharedInstanceUserPermissions::from_bits(
m.permissions as u64,
)
.unwrap_or(
SharedInstanceUserPermissions::empty(),
),
},
);
async move { Ok(acc) }
})
.await?;
Ok(users)
},
)
.await?;
Ok(users.into_iter().flatten().collect())
}
pub async fn clear_cache(
instance_id: DBSharedInstanceId,
redis: &RedisPool,
) -> Result<(), super::DatabaseError> {
let mut redis = redis.connect().await?;
redis.delete(USERS_NAMESPACE, instance_id.0).await?;
Ok(())
}
}
//endregion
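Since get_from_instance_many answers from a Redis cache keyed by instance id, any write that changes membership needs a matching clear_cache call, or readers keep seeing a stale member list. A small sketch of that pairing; the helper is hypothetical and assumes it lives in this module:

// Hypothetical helper, shown only to illustrate the cache-invalidation pairing.
pub async fn add_member_and_invalidate(
    new_member: DBSharedInstanceUser,
    pool: &sqlx::PgPool,
    redis: &RedisPool,
) -> Result<(), super::DatabaseError> {
    let instance_id = new_member.shared_instance_id;

    let mut transaction = pool.begin().await?;
    new_member.insert(&mut transaction).await?;
    transaction.commit().await?;

    // Drop the cached member list so the next get_from_instance() call
    // repopulates it from Postgres with the new row included.
    DBSharedInstanceUser::clear_cache(instance_id, redis).await?;
    Ok(())
}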
//region shared_instance_versions
pub struct DBSharedInstanceVersion {
pub id: DBSharedInstanceVersionId,
pub shared_instance_id: DBSharedInstanceId,
pub size: u64,
pub sha512: Vec<u8>,
pub created: DateTime<Utc>,
}
struct SharedInstanceVersionQueryResult {
id: i64,
shared_instance_id: i64,
size: i64,
sha512: Vec<u8>,
created: DateTime<Utc>,
}
impl From<SharedInstanceVersionQueryResult> for DBSharedInstanceVersion {
fn from(val: SharedInstanceVersionQueryResult) -> Self {
DBSharedInstanceVersion {
id: DBSharedInstanceVersionId(val.id),
shared_instance_id: DBSharedInstanceId(val.shared_instance_id),
size: val.size as u64,
sha512: val.sha512,
created: val.created,
}
}
}
impl DBSharedInstanceVersion {
pub async fn insert(
&self,
transaction: &mut sqlx::Transaction<'_, sqlx::Postgres>,
) -> Result<(), sqlx::Error> {
sqlx::query!(
"
INSERT INTO shared_instance_versions (id, shared_instance_id, size, sha512, created)
VALUES ($1, $2, $3, $4, $5)
",
self.id as DBSharedInstanceVersionId,
self.shared_instance_id as DBSharedInstanceId,
self.size as i64,
self.sha512,
self.created,
)
.execute(&mut **transaction)
.await?;
Ok(())
}
pub async fn get(
id: DBSharedInstanceVersionId,
exec: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
) -> Result<Option<Self>, sqlx::Error> {
let result = sqlx::query_as!(
SharedInstanceVersionQueryResult,
"
SELECT id, shared_instance_id, size, sha512, created
FROM shared_instance_versions
WHERE id = $1
",
id as DBSharedInstanceVersionId,
)
.fetch_optional(exec)
.await?;
Ok(result.map(Into::into))
}
pub async fn get_for_instance(
instance_id: DBSharedInstanceId,
exec: impl sqlx::Executor<'_, Database = sqlx::Postgres>,
) -> Result<Vec<Self>, sqlx::Error> {
let results = sqlx::query_as!(
SharedInstanceVersionQueryResult,
"
SELECT id, shared_instance_id, size, sha512, created
FROM shared_instance_versions
WHERE shared_instance_id = $1
ORDER BY created DESC
",
instance_id as DBSharedInstanceId,
)
.fetch_all(exec)
.await?;
Ok(results.into_iter().map(Into::into).collect())
}
}
//endregion


@@ -511,6 +511,18 @@ impl DBUser {
.execute(&mut **transaction)
.await?;
sqlx::query!(
"
UPDATE shared_instances
SET owner_id = $1
WHERE owner_id = $2
",
deleted_user as DBUserId,
id as DBUserId,
)
.execute(&mut **transaction)
.await?;
use futures::TryStreamExt;
let notifications: Vec<i64> = sqlx::query!(
"


@@ -1,108 +0,0 @@
use super::{DeleteFileData, FileHost, FileHostingError, UploadFileData};
use async_trait::async_trait;
use bytes::Bytes;
use reqwest::Response;
use serde::Deserialize;
use sha2::Digest;
mod authorization;
mod delete;
mod upload;
pub struct BackblazeHost {
upload_url_data: authorization::UploadUrlData,
authorization_data: authorization::AuthorizationData,
}
impl BackblazeHost {
pub async fn new(key_id: &str, key: &str, bucket_id: &str) -> Self {
let authorization_data =
authorization::authorize_account(key_id, key).await.unwrap();
let upload_url_data =
authorization::get_upload_url(&authorization_data, bucket_id)
.await
.unwrap();
BackblazeHost {
upload_url_data,
authorization_data,
}
}
}
#[async_trait]
impl FileHost for BackblazeHost {
async fn upload_file(
&self,
content_type: &str,
file_name: &str,
file_bytes: Bytes,
) -> Result<UploadFileData, FileHostingError> {
let content_sha512 = format!("{:x}", sha2::Sha512::digest(&file_bytes));
let upload_data = upload::upload_file(
&self.upload_url_data,
content_type,
file_name,
file_bytes,
)
.await?;
Ok(UploadFileData {
file_id: upload_data.file_id,
file_name: upload_data.file_name,
content_length: upload_data.content_length,
content_sha512,
content_sha1: upload_data.content_sha1,
content_md5: upload_data.content_md5,
content_type: upload_data.content_type,
upload_timestamp: upload_data.upload_timestamp,
})
}
/*
async fn upload_file_streaming(
&self,
content_type: &str,
file_name: &str,
stream: reqwest::Body
) -> Result<UploadFileData, FileHostingError> {
use futures::stream::StreamExt;
let mut data = Vec::new();
while let Some(chunk) = stream.next().await {
data.extend_from_slice(&chunk.map_err(|e| FileHostingError::Other(e))?);
}
self.upload_file(content_type, file_name, data).await
}
*/
async fn delete_file_version(
&self,
file_id: &str,
file_name: &str,
) -> Result<DeleteFileData, FileHostingError> {
let delete_data = delete::delete_file_version(
&self.authorization_data,
file_id,
file_name,
)
.await?;
Ok(DeleteFileData {
file_id: delete_data.file_id,
file_name: delete_data.file_name,
})
}
}
pub async fn process_response<T>(
response: Response,
) -> Result<T, FileHostingError>
where
T: for<'de> Deserialize<'de>,
{
if response.status().is_success() {
Ok(response.json().await?)
} else {
Err(FileHostingError::BackblazeError(response.json().await?))
}
}


@@ -1,81 +0,0 @@
use crate::file_hosting::FileHostingError;
use base64::Engine;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AuthorizationPermissions {
bucket_id: Option<String>,
bucket_name: Option<String>,
capabilities: Vec<String>,
name_prefix: Option<String>,
}
#[derive(Serialize, Deserialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AuthorizationData {
pub absolute_minimum_part_size: i32,
pub account_id: String,
pub allowed: AuthorizationPermissions,
pub api_url: String,
pub authorization_token: String,
pub download_url: String,
pub recommended_part_size: i32,
}
#[derive(Serialize, Deserialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct UploadUrlData {
pub bucket_id: String,
pub upload_url: String,
pub authorization_token: String,
}
pub async fn authorize_account(
key_id: &str,
application_key: &str,
) -> Result<AuthorizationData, FileHostingError> {
let combined_key = format!("{key_id}:{application_key}");
let formatted_key = format!(
"Basic {}",
base64::engine::general_purpose::STANDARD.encode(combined_key)
);
let response = reqwest::Client::new()
.get("https://api.backblazeb2.com/b2api/v2/b2_authorize_account")
.header(reqwest::header::CONTENT_TYPE, "application/json")
.header(reqwest::header::AUTHORIZATION, formatted_key)
.send()
.await?;
super::process_response(response).await
}
pub async fn get_upload_url(
authorization_data: &AuthorizationData,
bucket_id: &str,
) -> Result<UploadUrlData, FileHostingError> {
let response = reqwest::Client::new()
.post(
format!(
"{}/b2api/v2/b2_get_upload_url",
authorization_data.api_url
)
.to_string(),
)
.header(reqwest::header::CONTENT_TYPE, "application/json")
.header(
reqwest::header::AUTHORIZATION,
&authorization_data.authorization_token,
)
.body(
serde_json::json!({
"bucketId": bucket_id,
})
.to_string(),
)
.send()
.await?;
super::process_response(response).await
}


@@ -1,38 +0,0 @@
use super::authorization::AuthorizationData;
use crate::file_hosting::FileHostingError;
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct DeleteFileData {
pub file_id: String,
pub file_name: String,
}
pub async fn delete_file_version(
authorization_data: &AuthorizationData,
file_id: &str,
file_name: &str,
) -> Result<DeleteFileData, FileHostingError> {
let response = reqwest::Client::new()
.post(format!(
"{}/b2api/v2/b2_delete_file_version",
authorization_data.api_url
))
.header(reqwest::header::CONTENT_TYPE, "application/json")
.header(
reqwest::header::AUTHORIZATION,
&authorization_data.authorization_token,
)
.body(
serde_json::json!({
"fileName": file_name,
"fileId": file_id
})
.to_string(),
)
.send()
.await?;
super::process_response(response).await
}


@@ -1,47 +0,0 @@
use super::authorization::UploadUrlData;
use crate::file_hosting::FileHostingError;
use bytes::Bytes;
use hex::ToHex;
use serde::{Deserialize, Serialize};
use sha1::Digest;
#[derive(Serialize, Deserialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct UploadFileData {
pub file_id: String,
pub file_name: String,
pub account_id: String,
pub bucket_id: String,
pub content_length: u32,
pub content_sha1: String,
pub content_md5: Option<String>,
pub content_type: String,
pub upload_timestamp: u64,
}
//Content Types found here: https://www.backblaze.com/b2/docs/content-types.html
pub async fn upload_file(
url_data: &UploadUrlData,
content_type: &str,
file_name: &str,
file_bytes: Bytes,
) -> Result<UploadFileData, FileHostingError> {
let response = reqwest::Client::new()
.post(&url_data.upload_url)
.header(
reqwest::header::AUTHORIZATION,
&url_data.authorization_token,
)
.header("X-Bz-File-Name", file_name)
.header(reqwest::header::CONTENT_TYPE, content_type)
.header(reqwest::header::CONTENT_LENGTH, file_bytes.len())
.header(
"X-Bz-Content-Sha1",
sha1::Sha1::digest(&file_bytes).encode_hex::<String>(),
)
.body(file_bytes)
.send()
.await?;
super::process_response(response).await
}


@@ -1,9 +1,13 @@
use super::{DeleteFileData, FileHost, FileHostingError, UploadFileData};
use super::{
DeleteFileData, FileHost, FileHostPublicity, FileHostingError,
UploadFileData,
};
use async_trait::async_trait;
use bytes::Bytes;
use chrono::Utc;
use hex::ToHex;
use sha2::Digest;
use std::path::PathBuf;
#[derive(Default)]
pub struct MockHost(());
@@ -20,11 +24,10 @@ impl FileHost for MockHost {
&self,
content_type: &str,
file_name: &str,
file_publicity: FileHostPublicity,
file_bytes: Bytes,
) -> Result<UploadFileData, FileHostingError> {
let path =
std::path::Path::new(&dotenvy::var("MOCK_FILE_PATH").unwrap())
.join(file_name.replace("../", ""));
let path = get_file_path(file_name, file_publicity);
std::fs::create_dir_all(
path.parent().ok_or(FileHostingError::InvalidFilename)?,
)?;
@@ -33,8 +36,8 @@ impl FileHost for MockHost {
std::fs::write(path, &*file_bytes)?;
Ok(UploadFileData {
file_id: String::from("MOCK_FILE_ID"),
file_name: file_name.to_string(),
file_publicity,
content_length: file_bytes.len() as u32,
content_sha512,
content_sha1,
@@ -44,20 +47,40 @@ impl FileHost for MockHost {
})
}
async fn delete_file_version(
async fn get_url_for_private_file(
&self,
file_id: &str,
file_name: &str,
_expiry_secs: u32,
) -> Result<String, FileHostingError> {
let cdn_url = dotenvy::var("CDN_URL").unwrap();
Ok(format!("{cdn_url}/private/{file_name}"))
}
async fn delete_file(
&self,
file_name: &str,
file_publicity: FileHostPublicity,
) -> Result<DeleteFileData, FileHostingError> {
let path =
std::path::Path::new(&dotenvy::var("MOCK_FILE_PATH").unwrap())
.join(file_name.replace("../", ""));
let path = get_file_path(file_name, file_publicity);
if path.exists() {
std::fs::remove_file(path)?;
}
Ok(DeleteFileData {
file_id: file_id.to_string(),
file_name: file_name.to_string(),
})
}
}
fn get_file_path(
file_name: &str,
file_publicity: FileHostPublicity,
) -> PathBuf {
let mut path = PathBuf::from(dotenvy::var("MOCK_FILE_PATH").unwrap());
if matches!(file_publicity, FileHostPublicity::Private) {
path.push("private");
}
path.push(file_name.replace("../", ""));
path
}


@@ -1,23 +1,17 @@
use async_trait::async_trait;
use thiserror::Error;
mod backblaze;
mod mock;
mod s3_host;
pub use backblaze::BackblazeHost;
use bytes::Bytes;
pub use mock::MockHost;
pub use s3_host::S3Host;
pub use s3_host::{S3BucketConfig, S3Host};
#[derive(Error, Debug)]
pub enum FileHostingError {
#[error("Error while accessing the data from backblaze")]
HttpError(#[from] reqwest::Error),
#[error("Backblaze error: {0}")]
BackblazeError(serde_json::Value),
#[error("S3 error: {0}")]
S3Error(String),
#[error("S3 error when {0}: {1}")]
S3Error(&'static str, s3::error::S3Error),
#[error("File system error in file hosting: {0}")]
FileSystemError(#[from] std::io::Error),
#[error("Invalid Filename")]
@@ -26,8 +20,8 @@ pub enum FileHostingError {
#[derive(Debug, Clone)]
pub struct UploadFileData {
pub file_id: String,
pub file_name: String,
pub file_publicity: FileHostPublicity,
pub content_length: u32,
pub content_sha512: String,
pub content_sha1: String,
@@ -38,22 +32,34 @@ pub struct UploadFileData {
#[derive(Debug, Clone)]
pub struct DeleteFileData {
pub file_id: String,
pub file_name: String,
}
#[derive(Debug, Copy, Clone)]
pub enum FileHostPublicity {
Public,
Private,
}
#[async_trait]
pub trait FileHost {
async fn upload_file(
&self,
content_type: &str,
file_name: &str,
file_publicity: FileHostPublicity,
file_bytes: Bytes,
) -> Result<UploadFileData, FileHostingError>;
async fn delete_file_version(
async fn get_url_for_private_file(
&self,
file_id: &str,
file_name: &str,
expiry_secs: u32,
) -> Result<String, FileHostingError>;
async fn delete_file(
&self,
file_name: &str,
file_publicity: FileHostPublicity,
) -> Result<DeleteFileData, FileHostingError>;
}
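The upshot of threading FileHostPublicity through the trait is that private objects are never served from the public CDN path; callers ask the host for a time-limited URL instead. A hedged sketch of a caller; the file name and expiry below are placeholders:

// Illustrative only: the surrounding function is hypothetical.
async fn example_private_url(
    file_host: &dyn FileHost,
) -> Result<String, FileHostingError> {
    // A file previously uploaded with FileHostPublicity::Private can only be
    // reached through a presigned URL that expires after the given number of seconds.
    file_host
        .get_url_for_private_file("shared_instance/example.mrpack", 600)
        .await
}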


@@ -1,5 +1,6 @@
use crate::file_hosting::{
DeleteFileData, FileHost, FileHostingError, UploadFileData,
DeleteFileData, FileHost, FileHostPublicity, FileHostingError,
UploadFileData,
};
use async_trait::async_trait;
use bytes::Bytes;
@@ -10,50 +11,70 @@ use s3::creds::Credentials;
use s3::region::Region;
use sha2::Digest;
pub struct S3BucketConfig {
pub name: String,
pub uses_path_style: bool,
pub region: String,
pub url: String,
pub access_token: String,
pub secret: String,
}
pub struct S3Host {
bucket: Bucket,
public_bucket: Bucket,
private_bucket: Bucket,
}
impl S3Host {
pub fn new(
bucket_name: &str,
bucket_region: &str,
url: &str,
access_token: &str,
secret: &str,
public_bucket: S3BucketConfig,
private_bucket: S3BucketConfig,
) -> Result<S3Host, FileHostingError> {
let bucket = Bucket::new(
bucket_name,
if bucket_region == "r2" {
Region::R2 {
account_id: url.to_string(),
}
} else {
Region::Custom {
region: bucket_region.to_string(),
endpoint: url.to_string(),
}
},
Credentials::new(
Some(access_token),
Some(secret),
None,
None,
None,
)
.map_err(|_| {
FileHostingError::S3Error(
"Error while creating credentials".to_string(),
let create_bucket =
|config: S3BucketConfig| -> Result<_, FileHostingError> {
let mut bucket = Bucket::new(
"",
if config.region == "r2" {
Region::R2 {
account_id: config.url,
}
} else {
Region::Custom {
region: config.region,
endpoint: config.url,
}
},
Credentials {
access_key: Some(config.access_token),
secret_key: Some(config.secret),
..Credentials::anonymous().unwrap()
},
)
})?,
)
.map_err(|_| {
FileHostingError::S3Error(
"Error while creating Bucket instance".to_string(),
)
})?;
.map_err(|e| {
FileHostingError::S3Error("creating Bucket instance", e)
})?;
Ok(S3Host { bucket: *bucket })
bucket.name = config.name;
if config.uses_path_style {
bucket.set_path_style();
} else {
bucket.set_subdomain_style();
}
Ok(bucket)
};
Ok(S3Host {
public_bucket: *create_bucket(public_bucket)?,
private_bucket: *create_bucket(private_bucket)?,
})
}
fn get_bucket(&self, publicity: FileHostPublicity) -> &Bucket {
match publicity {
FileHostPublicity::Public => &self.public_bucket,
FileHostPublicity::Private => &self.private_bucket,
}
}
}
@@ -63,27 +84,24 @@ impl FileHost for S3Host {
&self,
content_type: &str,
file_name: &str,
file_publicity: FileHostPublicity,
file_bytes: Bytes,
) -> Result<UploadFileData, FileHostingError> {
let content_sha1 = sha1::Sha1::digest(&file_bytes).encode_hex();
let content_sha512 = format!("{:x}", sha2::Sha512::digest(&file_bytes));
self.bucket
self.get_bucket(file_publicity)
.put_object_with_content_type(
format!("/{file_name}"),
&file_bytes,
content_type,
)
.await
.map_err(|err| {
FileHostingError::S3Error(format!(
"Error while uploading file {file_name} to S3: {err}"
))
})?;
.map_err(|e| FileHostingError::S3Error("uploading file", e))?;
Ok(UploadFileData {
file_id: file_name.to_string(),
file_name: file_name.to_string(),
file_publicity,
content_length: file_bytes.len() as u32,
content_sha512,
content_sha1,
@@ -93,22 +111,32 @@ impl FileHost for S3Host {
})
}
async fn delete_file_version(
async fn get_url_for_private_file(
&self,
file_id: &str,
file_name: &str,
expiry_secs: u32,
) -> Result<String, FileHostingError> {
let url = self
.private_bucket
.presign_get(format!("/{file_name}"), expiry_secs, None)
.await
.map_err(|e| {
FileHostingError::S3Error("generating presigned URL", e)
})?;
Ok(url)
}
async fn delete_file(
&self,
file_name: &str,
file_publicity: FileHostPublicity,
) -> Result<DeleteFileData, FileHostingError> {
self.bucket
self.get_bucket(file_publicity)
.delete_object(format!("/{file_name}"))
.await
.map_err(|err| {
FileHostingError::S3Error(format!(
"Error while deleting file {file_name} to S3: {err}"
))
})?;
.map_err(|e| FileHostingError::S3Error("deleting file", e))?;
Ok(DeleteFileData {
file_id: file_id.to_string(),
file_name: file_name.to_string(),
})
}


@@ -334,7 +334,7 @@ pub fn app_config(
pub fn check_env_vars() -> bool {
let mut failed = false;
fn check_var<T: std::str::FromStr>(var: &'static str) -> bool {
fn check_var<T: std::str::FromStr>(var: &str) -> bool {
let check = parse_var::<T>(var).is_none();
if check {
warn!(
@@ -361,25 +361,33 @@ pub fn check_env_vars() -> bool {
let storage_backend = dotenvy::var("STORAGE_BACKEND").ok();
match storage_backend.as_deref() {
Some("backblaze") => {
failed |= check_var::<String>("BACKBLAZE_KEY_ID");
failed |= check_var::<String>("BACKBLAZE_KEY");
failed |= check_var::<String>("BACKBLAZE_BUCKET_ID");
}
Some("s3") => {
failed |= check_var::<String>("S3_ACCESS_TOKEN");
failed |= check_var::<String>("S3_SECRET");
failed |= check_var::<String>("S3_URL");
failed |= check_var::<String>("S3_REGION");
failed |= check_var::<String>("S3_BUCKET_NAME");
let mut check_var_set = |var_prefix| {
failed |= check_var::<String>(&format!(
"S3_{var_prefix}_BUCKET_NAME"
));
failed |= check_var::<bool>(&format!(
"S3_{var_prefix}_USES_PATH_STYLE_BUCKET"
));
failed |=
check_var::<String>(&format!("S3_{var_prefix}_REGION"));
failed |= check_var::<String>(&format!("S3_{var_prefix}_URL"));
failed |= check_var::<String>(&format!(
"S3_{var_prefix}_ACCESS_TOKEN"
));
failed |=
check_var::<String>(&format!("S3_{var_prefix}_SECRET"));
};
check_var_set("PUBLIC");
check_var_set("PRIVATE");
}
Some("local") => {
failed |= check_var::<String>("MOCK_FILE_PATH");
}
Some(backend) => {
warn!(
"Variable `STORAGE_BACKEND` contains an invalid value: {}. Expected \"backblaze\", \"s3\", or \"local\".",
backend
"Variable `STORAGE_BACKEND` contains an invalid value: {backend}. Expected \"s3\" or \"local\"."
);
failed |= true;
}
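Spelled out, the closure above makes the s3 backend require twelve variables: S3_PUBLIC_BUCKET_NAME, S3_PUBLIC_USES_PATH_STYLE_BUCKET, S3_PUBLIC_REGION, S3_PUBLIC_URL, S3_PUBLIC_ACCESS_TOKEN and S3_PUBLIC_SECRET, plus the same six with the PRIVATE prefix. The names are derived mechanically from the format strings above, not from a separate list in this changeset.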


@@ -4,8 +4,9 @@ use actix_web_prom::PrometheusMetricsBuilder;
use clap::Parser;
use labrinth::background_task::BackgroundTask;
use labrinth::database::redis::RedisPool;
use labrinth::file_hosting::S3Host;
use labrinth::file_hosting::{S3BucketConfig, S3Host};
use labrinth::search;
use labrinth::util::env::parse_var;
use labrinth::util::ratelimit::rate_limit_middleware;
use labrinth::{check_env_vars, clickhouse, database, file_hosting, queue};
use std::ffi::CStr;
@@ -51,6 +52,7 @@ async fn main() -> std::io::Result<()> {
if check_env_vars() {
error!("Some environment variables are missing!");
std::process::exit(1);
}
// DSN is from SENTRY_DSN env variable.
@@ -93,24 +95,33 @@ async fn main() -> std::io::Result<()> {
let file_host: Arc<dyn file_hosting::FileHost + Send + Sync> =
match storage_backend.as_str() {
"backblaze" => Arc::new(
file_hosting::BackblazeHost::new(
&dotenvy::var("BACKBLAZE_KEY_ID").unwrap(),
&dotenvy::var("BACKBLAZE_KEY").unwrap(),
&dotenvy::var("BACKBLAZE_BUCKET_ID").unwrap(),
"s3" => {
let config_from_env = |bucket_type| S3BucketConfig {
name: parse_var(&format!("S3_{bucket_type}_BUCKET_NAME"))
.unwrap(),
uses_path_style: parse_var(&format!(
"S3_{bucket_type}_USES_PATH_STYLE_BUCKET"
))
.unwrap(),
region: parse_var(&format!("S3_{bucket_type}_REGION"))
.unwrap(),
url: parse_var(&format!("S3_{bucket_type}_URL")).unwrap(),
access_token: parse_var(&format!(
"S3_{bucket_type}_ACCESS_TOKEN"
))
.unwrap(),
secret: parse_var(&format!("S3_{bucket_type}_SECRET"))
.unwrap(),
};
Arc::new(
S3Host::new(
config_from_env("PUBLIC"),
config_from_env("PRIVATE"),
)
.unwrap(),
)
.await,
),
"s3" => Arc::new(
S3Host::new(
&dotenvy::var("S3_BUCKET_NAME").unwrap(),
&dotenvy::var("S3_REGION").unwrap(),
&dotenvy::var("S3_URL").unwrap(),
&dotenvy::var("S3_ACCESS_TOKEN").unwrap(),
&dotenvy::var("S3_SECRET").unwrap(),
)
.unwrap(),
),
}
"local" => Arc::new(file_hosting::MockHost::new()),
_ => panic!("Invalid storage backend specified. Aborting startup!"),
};


@@ -16,6 +16,7 @@ pub use v3::payouts;
pub use v3::projects;
pub use v3::reports;
pub use v3::sessions;
pub use v3::shared_instances;
pub use v3::teams;
pub use v3::threads;
pub use v3::users;


@@ -17,6 +17,8 @@ base62_id!(ProductPriceId);
base62_id!(ProjectId);
base62_id!(ReportId);
base62_id!(SessionId);
base62_id!(SharedInstanceId);
base62_id!(SharedInstanceVersionId);
base62_id!(TeamId);
base62_id!(TeamMemberId);
base62_id!(ThreadId);


@@ -12,6 +12,7 @@ pub mod payouts;
pub mod projects;
pub mod reports;
pub mod sessions;
pub mod shared_instances;
pub mod teams;
pub mod threads;
pub mod users;


@@ -100,6 +100,24 @@ bitflags::bitflags! {
// only accessible by modrinth-issued sessions
const SESSION_ACCESS = 1 << 39;
// create a shared instance
const SHARED_INSTANCE_CREATE = 1 << 40;
// read a shared instance
const SHARED_INSTANCE_READ = 1 << 41;
// write to a shared instance
const SHARED_INSTANCE_WRITE = 1 << 42;
// delete a shared instance
const SHARED_INSTANCE_DELETE = 1 << 43;
// create a shared instance version
const SHARED_INSTANCE_VERSION_CREATE = 1 << 44;
// read a shared instance version
const SHARED_INSTANCE_VERSION_READ = 1 << 45;
// write to a shared instance version
const SHARED_INSTANCE_VERSION_WRITE = 1 << 46;
// delete a shared instance version
const SHARED_INSTANCE_VERSION_DELETE = 1 << 47;
const NONE = 0b0;
}
}
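The eight new scopes follow the existing create/read/write/delete convention and occupy bits 40 through 47, directly after SESSION_ACCESS. A quick illustrative check of how they combine; the union below is just an example, not a named scope defined by this changeset:

#[test]
fn shared_instance_scope_union_example() {
    // A PAT meant to manage instances end to end would need the union of
    // the instance and instance-version scopes.
    let all_shared_instance = Scopes::SHARED_INSTANCE_CREATE
        | Scopes::SHARED_INSTANCE_READ
        | Scopes::SHARED_INSTANCE_WRITE
        | Scopes::SHARED_INSTANCE_DELETE
        | Scopes::SHARED_INSTANCE_VERSION_CREATE
        | Scopes::SHARED_INSTANCE_VERSION_READ
        | Scopes::SHARED_INSTANCE_VERSION_WRITE
        | Scopes::SHARED_INSTANCE_VERSION_DELETE;

    // Route guards test membership with bitflags' `contains`.
    assert!(all_shared_instance.contains(Scopes::SHARED_INSTANCE_VERSION_CREATE));
    assert!(!Scopes::SHARED_INSTANCE_READ.contains(Scopes::SHARED_INSTANCE_WRITE));
}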


@@ -0,0 +1,89 @@
use crate::bitflags_serde_impl;
use crate::database::models::shared_instance_item::{
DBSharedInstance, DBSharedInstanceUser, DBSharedInstanceVersion,
};
use crate::models::ids::{SharedInstanceId, SharedInstanceVersionId};
use ariadne::ids::UserId;
use bitflags::bitflags;
use chrono::{DateTime, Utc};
use hex::ToHex;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SharedInstance {
pub id: SharedInstanceId,
pub title: String,
pub owner: UserId,
pub public: bool,
pub current_version: Option<SharedInstanceVersion>,
#[serde(skip_serializing_if = "Option::is_none")]
pub additional_users: Option<Vec<SharedInstanceUser>>,
}
impl SharedInstance {
pub fn from_db(
instance: DBSharedInstance,
users: Option<Vec<DBSharedInstanceUser>>,
current_version: Option<DBSharedInstanceVersion>,
) -> Self {
SharedInstance {
id: instance.id.into(),
title: instance.title,
owner: instance.owner_id.into(),
public: instance.public,
current_version: current_version.map(Into::into),
additional_users: users
.map(|x| x.into_iter().map(Into::into).collect()),
}
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SharedInstanceVersion {
pub id: SharedInstanceVersionId,
pub shared_instance: SharedInstanceId,
pub size: u64,
pub sha512: String,
pub created: DateTime<Utc>,
}
impl From<DBSharedInstanceVersion> for SharedInstanceVersion {
fn from(value: DBSharedInstanceVersion) -> Self {
let version_id = value.id.into();
let shared_instance_id = value.shared_instance_id.into();
SharedInstanceVersion {
id: version_id,
shared_instance: shared_instance_id,
size: value.size,
sha512: value.sha512.encode_hex(),
created: value.created,
}
}
}
bitflags! {
#[derive(Copy, Clone, Debug)]
pub struct SharedInstanceUserPermissions: u64 {
const EDIT = 1 << 0;
const DELETE = 1 << 1;
const UPLOAD_VERSION = 1 << 2;
const DELETE_VERSION = 1 << 3;
}
}
bitflags_serde_impl!(SharedInstanceUserPermissions, u64);
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SharedInstanceUser {
pub user: UserId,
pub permissions: SharedInstanceUserPermissions,
}
impl From<DBSharedInstanceUser> for SharedInstanceUser {
fn from(user: DBSharedInstanceUser) -> Self {
SharedInstanceUser {
user: user.user_id.into(),
permissions: user.permissions,
}
}
}
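Permissions are stored in Postgres as a signed BIGINT bitmask, which is why the database layer recovers them with from_bits (see DBSharedInstanceUser::get_user_permissions earlier in this diff). A small sketch of that recovery; the literal bit pattern is only an example:

#[test]
fn shared_instance_permission_bits_example() {
    // 0b101 sets EDIT (1 << 0) and UPLOAD_VERSION (1 << 2).
    let stored: i64 = 0b101;
    let perms = SharedInstanceUserPermissions::from_bits(stored as u64)
        .unwrap_or(SharedInstanceUserPermissions::empty());

    assert!(perms.contains(SharedInstanceUserPermissions::EDIT));
    assert!(perms.contains(SharedInstanceUserPermissions::UPLOAD_VERSION));
    assert!(!perms.contains(SharedInstanceUserPermissions::DELETE_VERSION));
}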


@@ -4,7 +4,7 @@ use crate::auth::{AuthProvider, AuthenticationError, get_user_from_headers};
use crate::database::models::DBUser;
use crate::database::models::flow_item::DBFlow;
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::pats::Scopes;
use crate::models::users::{Badges, Role};
use crate::queue::session::AuthQueue;
@@ -136,6 +136,7 @@ impl TempUser {
let upload_result = upload_image_optimized(
&format!("user/{}", ariadne::ids::UserId::from(user_id)),
FileHostPublicity::Public,
bytes,
ext,
Some(96),
@@ -2030,7 +2031,9 @@ pub async fn change_password(
Some(user)
} else {
None
return Err(ApiError::CustomAuthentication(
"The password change flow code is invalid or has expired. Did you copy it promptly and correctly?".to_string(),
));
}
} else {
None


@@ -4,7 +4,7 @@ use crate::database::models::{
collection_item, generate_collection_id, project_item,
};
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::collections::{Collection, CollectionStatus};
use crate::models::ids::{CollectionId, ProjectId};
use crate::models::pats::Scopes;
@@ -12,7 +12,7 @@ use crate::queue::session::AuthQueue;
use crate::routes::ApiError;
use crate::routes::v3::project_creation::CreateError;
use crate::util::img::delete_old_images;
use crate::util::routes::read_from_payload;
use crate::util::routes::read_limited_from_payload;
use crate::util::validate::validation_errors_to_string;
use crate::{database, models};
use actix_web::web::Data;
@@ -413,11 +413,12 @@ pub async fn collection_icon_edit(
delete_old_images(
collection_item.icon_url,
collection_item.raw_icon_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;
let bytes = read_from_payload(
let bytes = read_limited_from_payload(
&mut payload,
262144,
"Icons must be smaller than 256KiB",
@@ -427,6 +428,7 @@ pub async fn collection_icon_edit(
let collection_id: CollectionId = collection_item.id.into();
let upload_result = crate::util::img::upload_image_optimized(
&format!("data/{collection_id}"),
FileHostPublicity::Public,
bytes.freeze(),
&ext.ext,
Some(96),
@@ -493,6 +495,7 @@ pub async fn delete_collection_icon(
delete_old_images(
collection_item.icon_url,
collection_item.raw_icon_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;


@@ -8,13 +8,13 @@ use crate::database::models::{
project_item, report_item, thread_item, version_item,
};
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::ids::{ReportId, ThreadMessageId, VersionId};
use crate::models::images::{Image, ImageContext};
use crate::queue::session::AuthQueue;
use crate::routes::ApiError;
use crate::util::img::upload_image_optimized;
use crate::util::routes::read_from_payload;
use crate::util::routes::read_limited_from_payload;
use actix_web::{HttpRequest, HttpResponse, web};
use serde::{Deserialize, Serialize};
use sqlx::PgPool;
@@ -176,7 +176,7 @@ pub async fn images_add(
}
// Upload the image to the file host
let bytes = read_from_payload(
let bytes = read_limited_from_payload(
&mut payload,
1_048_576,
"Icons must be smaller than 1MiB",
@@ -186,6 +186,7 @@ pub async fn images_add(
let content_length = bytes.len();
let upload_result = upload_image_optimized(
"data/cached_images",
FileHostPublicity::Public, // FIXME: Maybe use private images for threads
bytes.freeze(),
&data.ext,
None,


@@ -13,6 +13,8 @@ pub mod payouts;
pub mod project_creation;
pub mod projects;
pub mod reports;
pub mod shared_instance_version_creation;
pub mod shared_instances;
pub mod statistics;
pub mod tags;
pub mod teams;
@@ -36,6 +38,8 @@ pub fn config(cfg: &mut web::ServiceConfig) {
.configure(project_creation::config)
.configure(projects::config)
.configure(reports::config)
.configure(shared_instance_version_creation::config)
.configure(shared_instances::config)
.configure(statistics::config)
.configure(tags::config)
.configure(teams::config)


@@ -1,6 +1,9 @@
use std::{collections::HashSet, fmt::Display, sync::Arc};
use super::ApiError;
use crate::file_hosting::FileHostPublicity;
use crate::models::ids::OAuthClientId;
use crate::util::img::{delete_old_images, upload_image_optimized};
use crate::{
auth::{checks::ValidateAuthorized, get_user_from_headers},
database::{
@@ -23,7 +26,7 @@ use crate::{
};
use crate::{
file_hosting::FileHost, models::oauth_clients::DeleteOAuthClientQueryParam,
util::routes::read_from_payload,
util::routes::read_limited_from_payload,
};
use actix_web::{
HttpRequest, HttpResponse, delete, get, patch, post,
@@ -38,9 +41,6 @@ use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use validator::Validate;
use crate::models::ids::OAuthClientId;
use crate::util::img::{delete_old_images, upload_image_optimized};
pub fn config(cfg: &mut web::ServiceConfig) {
cfg.service(
scope("oauth")
@@ -381,11 +381,12 @@ pub async fn oauth_client_icon_edit(
delete_old_images(
client.icon_url.clone(),
client.raw_icon_url.clone(),
FileHostPublicity::Public,
&***file_host,
)
.await?;
let bytes = read_from_payload(
let bytes = read_limited_from_payload(
&mut payload,
262144,
"Icons must be smaller than 256KiB",
@@ -393,6 +394,7 @@ pub async fn oauth_client_icon_edit(
.await?;
let upload_result = upload_image_optimized(
&format!("data/{client_id}"),
FileHostPublicity::Public,
bytes.freeze(),
&ext.ext,
Some(96),
@@ -447,6 +449,7 @@ pub async fn oauth_client_icon_delete(
delete_old_images(
client.icon_url.clone(),
client.raw_icon_url.clone(),
FileHostPublicity::Public,
&***file_host,
)
.await?;


@@ -8,14 +8,14 @@ use crate::database::models::{
DBOrganization, generate_organization_id, team_item,
};
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::ids::OrganizationId;
use crate::models::pats::Scopes;
use crate::models::teams::{OrganizationPermissions, ProjectPermissions};
use crate::queue::session::AuthQueue;
use crate::routes::v3::project_creation::CreateError;
use crate::util::img::delete_old_images;
use crate::util::routes::read_from_payload;
use crate::util::routes::read_limited_from_payload;
use crate::util::validate::validation_errors_to_string;
use crate::{database, models};
use actix_web::{HttpRequest, HttpResponse, web};
@@ -667,8 +667,13 @@ pub async fn organization_delete(
)
.await?;
for team_id in organization_project_teams {
database::models::DBTeamMember::clear_cache(team_id, &redis).await?;
for team_id in &organization_project_teams {
database::models::DBTeamMember::clear_cache(*team_id, &redis).await?;
}
if !organization_project_teams.is_empty() {
database::models::DBUser::clear_project_cache(&[owner_id], &redis)
.await?;
}
if result.is_some() {
@@ -1088,11 +1093,12 @@ pub async fn organization_icon_edit(
delete_old_images(
organization_item.icon_url,
organization_item.raw_icon_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;
let bytes = read_from_payload(
let bytes = read_limited_from_payload(
&mut payload,
262144,
"Icons must be smaller than 256KiB",
@@ -1102,6 +1108,7 @@ pub async fn organization_icon_edit(
let organization_id: OrganizationId = organization_item.id.into();
let upload_result = crate::util::img::upload_image_optimized(
&format!("data/{organization_id}"),
FileHostPublicity::Public,
bytes.freeze(),
&ext.ext,
Some(96),
@@ -1191,6 +1198,7 @@ pub async fn delete_organization_icon(
delete_old_images(
organization_item.icon_url,
organization_item.raw_icon_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;


@@ -6,7 +6,7 @@ use crate::database::models::loader_fields::{
use crate::database::models::thread_item::ThreadBuilder;
use crate::database::models::{self, DBUser, image_item};
use crate::database::redis::RedisPool;
use crate::file_hosting::{FileHost, FileHostingError};
use crate::file_hosting::{FileHost, FileHostPublicity, FileHostingError};
use crate::models::error::ApiError;
use crate::models::ids::{ImageId, OrganizationId, ProjectId, VersionId};
use crate::models::images::{Image, ImageContext};
@@ -240,18 +240,16 @@ pub struct NewGalleryItem {
}
pub struct UploadedFile {
pub file_id: String,
pub file_name: String,
pub name: String,
pub publicity: FileHostPublicity,
}
pub async fn undo_uploads(
file_host: &dyn FileHost,
uploaded_files: &[UploadedFile],
) -> Result<(), CreateError> {
) -> Result<(), FileHostingError> {
for file in uploaded_files {
file_host
.delete_file_version(&file.file_id, &file.file_name)
.await?;
file_host.delete_file(&file.name, file.publicity).await?;
}
Ok(())
}
@@ -309,13 +307,13 @@ Get logged in user
2. Upload
- Icon: check file format & size
- Upload to backblaze & record URL
- Upload to S3 & record URL
- Project files
- Check for matching version
- File size limits?
- Check file type
- Eventually, malware scan
- Upload to backblaze & create VersionFileBuilder
- Upload to S3 & create VersionFileBuilder
-
3. Creation
@@ -334,7 +332,7 @@ async fn project_create_inner(
redis: &RedisPool,
session_queue: &AuthQueue,
) -> Result<HttpResponse, CreateError> {
// The base URL for files uploaded to backblaze
// The base URL for files uploaded to S3
let cdn_url = dotenvy::var("CDN_URL")?;
// The currently logged in user
@@ -516,6 +514,7 @@ async fn project_create_inner(
let url = format!("data/{project_id}/images");
let upload_result = upload_image_optimized(
&url,
FileHostPublicity::Public,
data.freeze(),
file_extension,
Some(350),
@@ -526,8 +525,8 @@ async fn project_create_inner(
.map_err(|e| CreateError::InvalidIconFormat(e.to_string()))?;
uploaded_files.push(UploadedFile {
file_id: upload_result.raw_url_path.clone(),
file_name: upload_result.raw_url_path,
name: upload_result.raw_url_path,
publicity: FileHostPublicity::Public,
});
gallery_urls.push(crate::models::projects::GalleryItem {
url: upload_result.url,
@@ -1010,6 +1009,7 @@ async fn process_icon_upload(
.await?;
let upload_result = crate::util::img::upload_image_optimized(
&format!("data/{}", to_base62(id)),
FileHostPublicity::Public,
data.freeze(),
file_extension,
Some(96),
@@ -1020,13 +1020,13 @@ async fn process_icon_upload(
.map_err(|e| CreateError::InvalidIconFormat(e.to_string()))?;
uploaded_files.push(UploadedFile {
file_id: upload_result.raw_url_path.clone(),
file_name: upload_result.raw_url_path,
name: upload_result.raw_url_path,
publicity: FileHostPublicity::Public,
});
uploaded_files.push(UploadedFile {
file_id: upload_result.url_path.clone(),
file_name: upload_result.url_path,
name: upload_result.url_path,
publicity: FileHostPublicity::Public,
});
Ok((


@@ -9,7 +9,7 @@ use crate::database::models::thread_item::ThreadMessageBuilder;
use crate::database::models::{DBTeamMember, ids as db_ids, image_item};
use crate::database::redis::RedisPool;
use crate::database::{self, models as db_models};
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models;
use crate::models::ids::ProjectId;
use crate::models::images::ImageContext;
@@ -28,7 +28,7 @@ use crate::search::indexing::remove_documents;
use crate::search::{SearchConfig, SearchError, search_for_project};
use crate::util::img;
use crate::util::img::{delete_old_images, upload_image_optimized};
use crate::util::routes::read_from_payload;
use crate::util::routes::read_limited_from_payload;
use crate::util::validate::validation_errors_to_string;
use actix_web::{HttpRequest, HttpResponse, web};
use ariadne::ids::base62_impl::parse_base62;
@@ -1487,11 +1487,12 @@ pub async fn project_icon_edit(
delete_old_images(
project_item.inner.icon_url,
project_item.inner.raw_icon_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;
let bytes = read_from_payload(
let bytes = read_limited_from_payload(
&mut payload,
262144,
"Icons must be smaller than 256KiB",
@@ -1501,6 +1502,7 @@ pub async fn project_icon_edit(
let project_id: ProjectId = project_item.inner.id.into();
let upload_result = upload_image_optimized(
&format!("data/{project_id}"),
FileHostPublicity::Public,
bytes.freeze(),
&ext.ext,
Some(96),
@@ -1597,6 +1599,7 @@ pub async fn delete_project_icon(
delete_old_images(
project_item.inner.icon_url,
project_item.inner.raw_icon_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;
@@ -1709,7 +1712,7 @@ pub async fn add_gallery_item(
}
}
let bytes = read_from_payload(
let bytes = read_limited_from_payload(
&mut payload,
2 * (1 << 20),
"Gallery image exceeds the maximum of 2MiB.",
@@ -1719,6 +1722,7 @@ pub async fn add_gallery_item(
let id: ProjectId = project_item.inner.id.into();
let upload_result = upload_image_optimized(
&format!("data/{id}/images"),
FileHostPublicity::Public,
bytes.freeze(),
&ext.ext,
Some(350),
@@ -2049,6 +2053,7 @@ pub async fn delete_gallery_item(
delete_old_images(
Some(item.image_url),
Some(item.raw_image_url),
FileHostPublicity::Public,
&***file_host,
)
.await?;


@@ -14,11 +14,11 @@ use crate::models::threads::{MessageBody, ThreadType};
use crate::queue::session::AuthQueue;
use crate::routes::ApiError;
use crate::util::img;
use crate::util::routes::read_typed_from_payload;
use actix_web::{HttpRequest, HttpResponse, web};
use ariadne::ids::UserId;
use ariadne::ids::base62_impl::parse_base62;
use chrono::Utc;
use futures::StreamExt;
use serde::Deserialize;
use sqlx::PgPool;
use validator::Validate;
@@ -63,15 +63,7 @@ pub async fn report_create(
.await?
.1;
let mut bytes = web::BytesMut::new();
while let Some(item) = body.next().await {
bytes.extend_from_slice(&item.map_err(|_| {
ApiError::InvalidInput(
"Error while parsing request payload!".to_string(),
)
})?);
}
let new_report: CreateReport = serde_json::from_slice(bytes.as_ref())?;
let new_report: CreateReport = read_typed_from_payload(&mut body).await?;
let id =
crate::database::models::generate_report_id(&mut transaction).await?;


@@ -0,0 +1,200 @@
use crate::auth::get_user_from_headers;
use crate::database::models::shared_instance_item::{
DBSharedInstance, DBSharedInstanceUser, DBSharedInstanceVersion,
};
use crate::database::models::{
DBSharedInstanceId, DBSharedInstanceVersionId,
generate_shared_instance_version_id,
};
use crate::database::redis::RedisPool;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::ids::{SharedInstanceId, SharedInstanceVersionId};
use crate::models::pats::Scopes;
use crate::models::shared_instances::{
SharedInstanceUserPermissions, SharedInstanceVersion,
};
use crate::queue::session::AuthQueue;
use crate::routes::ApiError;
use crate::routes::v3::project_creation::UploadedFile;
use crate::util::ext::MRPACK_MIME_TYPE;
use actix_web::http::header::ContentLength;
use actix_web::web::Data;
use actix_web::{HttpRequest, HttpResponse, web};
use bytes::BytesMut;
use chrono::Utc;
use futures_util::StreamExt;
use hex::FromHex;
use sqlx::{PgPool, Postgres, Transaction};
use std::sync::Arc;
const MAX_FILE_SIZE: usize = 500 * 1024 * 1024;
const MAX_FILE_SIZE_TEXT: &str = "500 MB";
pub fn config(cfg: &mut web::ServiceConfig) {
cfg.route(
"shared-instance/{id}/version",
web::post().to(shared_instance_version_create),
);
}
#[allow(clippy::too_many_arguments)]
pub async fn shared_instance_version_create(
req: HttpRequest,
pool: Data<PgPool>,
payload: web::Payload,
web::Header(ContentLength(content_length)): web::Header<ContentLength>,
redis: Data<RedisPool>,
file_host: Data<Arc<dyn FileHost + Send + Sync>>,
info: web::Path<(SharedInstanceId,)>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
if content_length > MAX_FILE_SIZE {
return Err(ApiError::InvalidInput(format!(
"File size exceeds the maximum limit of {MAX_FILE_SIZE_TEXT}"
)));
}
let mut transaction = pool.begin().await?;
let mut uploaded_files = vec![];
let result = shared_instance_version_create_inner(
req,
&pool,
payload,
content_length,
&redis,
&***file_host,
info.into_inner().0.into(),
&session_queue,
&mut transaction,
&mut uploaded_files,
)
.await;
if result.is_err() {
let undo_result = super::project_creation::undo_uploads(
&***file_host,
&uploaded_files,
)
.await;
let rollback_result = transaction.rollback().await;
undo_result?;
if let Err(e) = rollback_result {
return Err(e.into());
}
} else {
transaction.commit().await?;
}
result
}
#[allow(clippy::too_many_arguments)]
async fn shared_instance_version_create_inner(
req: HttpRequest,
pool: &PgPool,
mut payload: web::Payload,
content_length: usize,
redis: &RedisPool,
file_host: &dyn FileHost,
instance_id: DBSharedInstanceId,
session_queue: &AuthQueue,
transaction: &mut Transaction<'_, Postgres>,
uploaded_files: &mut Vec<UploadedFile>,
) -> Result<HttpResponse, ApiError> {
let user = get_user_from_headers(
&req,
pool,
redis,
session_queue,
Scopes::SHARED_INSTANCE_VERSION_CREATE,
)
.await?
.1;
let Some(instance) = DBSharedInstance::get(instance_id, pool).await? else {
return Err(ApiError::NotFound);
};
if !user.role.is_mod() && instance.owner_id != user.id.into() {
let permissions = DBSharedInstanceUser::get_user_permissions(
instance_id,
user.id.into(),
pool,
)
.await?;
if let Some(permissions) = permissions {
if !permissions
.contains(SharedInstanceUserPermissions::UPLOAD_VERSION)
{
return Err(ApiError::CustomAuthentication(
"You do not have permission to upload a version for this shared instance.".to_string()
));
}
} else {
return Err(ApiError::NotFound);
}
}
let version_id =
generate_shared_instance_version_id(&mut *transaction).await?;
let mut file_data = BytesMut::new();
while let Some(chunk) = payload.next().await {
let chunk = chunk.map_err(|_| {
ApiError::InvalidInput(
"Unable to parse bytes in payload sent!".to_string(),
)
})?;
if file_data.len() + chunk.len() <= MAX_FILE_SIZE {
file_data.extend_from_slice(&chunk);
} else {
file_data
.extend_from_slice(&chunk[..MAX_FILE_SIZE - file_data.len()]);
break;
}
}
let file_data = file_data.freeze();
let file_path = format!(
"shared_instance/{}.mrpack",
SharedInstanceVersionId::from(version_id),
);
let upload_data = file_host
.upload_file(
MRPACK_MIME_TYPE,
&file_path,
FileHostPublicity::Private,
file_data,
)
.await?;
uploaded_files.push(UploadedFile {
name: file_path,
publicity: upload_data.file_publicity,
});
let sha512 = Vec::<u8>::from_hex(upload_data.content_sha512).unwrap();
let new_version = DBSharedInstanceVersion {
id: version_id,
shared_instance_id: instance_id,
size: content_length as u64,
sha512,
created: Utc::now(),
};
new_version.insert(transaction).await?;
sqlx::query!(
"UPDATE shared_instances SET current_version_id = $1 WHERE id = $2",
new_version.id as DBSharedInstanceVersionId,
instance_id as DBSharedInstanceId,
)
.execute(&mut **transaction)
.await?;
let version: SharedInstanceVersion = new_version.into();
Ok(HttpResponse::Created().json(version))
}
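From the outside, creating a version is a single raw POST of the .mrpack bytes to the new route, authenticated with a token carrying SHARED_INSTANCE_VERSION_CREATE. The reqwest-based client below is purely illustrative (labrinth does not ship it); the base URL, token, and /v3 prefix are assumptions about the deployment rather than part of this changeset:

use reqwest::header::AUTHORIZATION;

// Hypothetical client-side helper, not part of this changeset.
async fn upload_shared_instance_version(
    base_url: &str,    // e.g. "https://api.example.invalid" (placeholder)
    pat: &str,         // token with the SHARED_INSTANCE_VERSION_CREATE scope
    instance_id: &str, // base62 id of the shared instance
    mrpack_bytes: Vec<u8>,
) -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    client
        .post(format!(
            "{base_url}/v3/shared-instance/{instance_id}/version"
        ))
        .header(AUTHORIZATION, pat)
        // reqwest sets Content-Length from the body, which the handler reads
        // to reject uploads above MAX_FILE_SIZE before buffering them.
        .body(mrpack_bytes)
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}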


@@ -0,0 +1,612 @@
use crate::auth::get_user_from_headers;
use crate::auth::validate::get_maybe_user_from_headers;
use crate::database::models::shared_instance_item::{
DBSharedInstance, DBSharedInstanceUser, DBSharedInstanceVersion,
};
use crate::database::models::{
DBSharedInstanceId, DBSharedInstanceVersionId, generate_shared_instance_id,
};
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::models::ids::{SharedInstanceId, SharedInstanceVersionId};
use crate::models::pats::Scopes;
use crate::models::shared_instances::{
SharedInstance, SharedInstanceUserPermissions, SharedInstanceVersion,
};
use crate::models::users::User;
use crate::queue::session::AuthQueue;
use crate::routes::ApiError;
use crate::util::routes::read_typed_from_payload;
use actix_web::web::{Data, Redirect};
use actix_web::{HttpRequest, HttpResponse, web};
use futures_util::future::try_join_all;
use serde::Deserialize;
use sqlx::PgPool;
use std::sync::Arc;
use validator::Validate;
pub fn config(cfg: &mut web::ServiceConfig) {
cfg.route("shared-instance", web::post().to(shared_instance_create));
cfg.route("shared-instance", web::get().to(shared_instance_list));
cfg.service(
web::scope("shared-instance")
.route("{id}", web::get().to(shared_instance_get))
.route("{id}", web::patch().to(shared_instance_edit))
.route("{id}", web::delete().to(shared_instance_delete))
.route("{id}/version", web::get().to(shared_instance_version_list)),
);
cfg.service(
web::scope("shared-instance-version")
.route("{id}", web::get().to(shared_instance_version_get))
.route("{id}", web::delete().to(shared_instance_version_delete))
.route(
"{id}/download",
web::get().to(shared_instance_version_download),
),
);
}
#[derive(Deserialize, Validate)]
pub struct CreateSharedInstance {
#[validate(
length(min = 3, max = 64),
custom(function = "crate::util::validate::validate_name")
)]
pub title: String,
#[serde(default)]
pub public: bool,
}
pub async fn shared_instance_create(
req: HttpRequest,
pool: Data<PgPool>,
mut body: web::Payload,
redis: Data<RedisPool>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let new_instance: CreateSharedInstance =
read_typed_from_payload(&mut body).await?;
let mut transaction = pool.begin().await?;
let user = get_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_CREATE,
)
.await?
.1;
let id = generate_shared_instance_id(&mut transaction).await?;
let instance = DBSharedInstance {
id,
title: new_instance.title,
owner_id: user.id.into(),
public: new_instance.public,
current_version_id: None,
};
instance.insert(&mut transaction).await?;
transaction.commit().await?;
Ok(HttpResponse::Created().json(SharedInstance {
id: id.into(),
title: instance.title,
owner: user.id,
public: instance.public,
current_version: None,
additional_users: Some(vec![]),
}))
}
pub async fn shared_instance_list(
req: HttpRequest,
pool: Data<PgPool>,
redis: Data<RedisPool>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let user = get_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_READ,
)
.await?
.1;
// TODO: Something for moderators to be able to see all instances?
let instances =
DBSharedInstance::list_for_user(user.id.into(), &**pool).await?;
let instances = try_join_all(instances.into_iter().map(
async |instance| -> Result<SharedInstance, ApiError> {
let version = if let Some(version_id) = instance.current_version_id
{
DBSharedInstanceVersion::get(version_id, &**pool).await?
} else {
None
};
let instance_id = instance.id;
Ok(SharedInstance::from_db(
instance,
Some(
DBSharedInstanceUser::get_from_instance(
instance_id,
&**pool,
&redis,
)
.await?,
),
version,
))
},
))
.await?;
Ok(HttpResponse::Ok().json(instances))
}
pub async fn shared_instance_get(
req: HttpRequest,
pool: Data<PgPool>,
redis: Data<RedisPool>,
info: web::Path<(SharedInstanceId,)>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let id = info.into_inner().0.into();
let user = get_maybe_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_READ,
)
.await?
.map(|(_, user)| user);
let shared_instance = DBSharedInstance::get(id, &**pool).await?;
if let Some(shared_instance) = shared_instance {
let users =
DBSharedInstanceUser::get_from_instance(id, &**pool, &redis)
.await?;
let privately_accessible = user.is_some_and(|user| {
can_access_instance_privately(&shared_instance, &users, &user)
});
if !shared_instance.public && !privately_accessible {
return Err(ApiError::NotFound);
}
let current_version =
if let Some(version_id) = shared_instance.current_version_id {
DBSharedInstanceVersion::get(version_id, &**pool).await?
} else {
None
};
let shared_instance = SharedInstance::from_db(
shared_instance,
privately_accessible.then_some(users),
current_version,
);
Ok(HttpResponse::Ok().json(shared_instance))
} else {
Err(ApiError::NotFound)
}
}
fn can_access_instance_privately(
instance: &DBSharedInstance,
users: &[DBSharedInstanceUser],
user: &User,
) -> bool {
user.role.is_mod()
|| instance.owner_id == user.id.into()
|| users.iter().any(|x| x.user_id == user.id.into())
}
#[derive(Deserialize, Validate)]
pub struct EditSharedInstance {
#[validate(
length(min = 3, max = 64),
custom(function = "crate::util::validate::validate_name")
)]
pub title: Option<String>,
pub public: Option<bool>,
}
pub async fn shared_instance_edit(
req: HttpRequest,
pool: Data<PgPool>,
mut body: web::Payload,
redis: Data<RedisPool>,
info: web::Path<(SharedInstanceId,)>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let id = info.into_inner().0.into();
let edit_instance: EditSharedInstance =
read_typed_from_payload(&mut body).await?;
let mut transaction = pool.begin().await?;
let user = get_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_WRITE,
)
.await?
.1;
let Some(instance) = DBSharedInstance::get(id, &**pool).await? else {
return Err(ApiError::NotFound);
};
if !user.role.is_mod() && instance.owner_id != user.id.into() {
let permissions = DBSharedInstanceUser::get_user_permissions(
id,
user.id.into(),
&**pool,
)
.await?;
if let Some(permissions) = permissions {
if !permissions.contains(SharedInstanceUserPermissions::EDIT) {
return Err(ApiError::CustomAuthentication(
"You do not have permission to edit this shared instance."
.to_string(),
));
}
} else {
return Err(ApiError::NotFound);
}
}
if let Some(title) = edit_instance.title {
sqlx::query!(
"
UPDATE shared_instances
SET title = $1
WHERE id = $2
",
title,
id as DBSharedInstanceId,
)
.execute(&mut *transaction)
.await?;
}
if let Some(public) = edit_instance.public {
sqlx::query!(
"
UPDATE shared_instances
SET public = $1
WHERE id = $2
",
public,
id as DBSharedInstanceId,
)
.execute(&mut *transaction)
.await?;
}
transaction.commit().await?;
Ok(HttpResponse::NoContent().body(""))
}
pub async fn shared_instance_delete(
req: HttpRequest,
pool: Data<PgPool>,
redis: Data<RedisPool>,
info: web::Path<(SharedInstanceId,)>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let id: DBSharedInstanceId = info.into_inner().0.into();
let user = get_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_DELETE,
)
.await?
.1;
let Some(instance) = DBSharedInstance::get(id, &**pool).await? else {
return Err(ApiError::NotFound);
};
if !user.role.is_mod() && instance.owner_id != user.id.into() {
let permissions = DBSharedInstanceUser::get_user_permissions(
id,
user.id.into(),
&**pool,
)
.await?;
if let Some(permissions) = permissions {
if !permissions.contains(SharedInstanceUserPermissions::DELETE) {
return Err(ApiError::CustomAuthentication(
"You do not have permission to delete this shared instance.".to_string()
));
}
} else {
return Err(ApiError::NotFound);
}
}
sqlx::query!(
"
DELETE FROM shared_instances
WHERE id = $1
",
id as DBSharedInstanceId,
)
.execute(&**pool)
.await?;
DBSharedInstanceUser::clear_cache(id, &redis).await?;
Ok(HttpResponse::NoContent().body(""))
}
pub async fn shared_instance_version_list(
req: HttpRequest,
pool: Data<PgPool>,
redis: Data<RedisPool>,
info: web::Path<(SharedInstanceId,)>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let id = info.into_inner().0.into();
let user = get_maybe_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_READ,
)
.await?
.map(|(_, user)| user);
let shared_instance = DBSharedInstance::get(id, &**pool).await?;
if let Some(shared_instance) = shared_instance {
if !can_access_instance_as_maybe_user(
&pool,
&redis,
&shared_instance,
user,
)
.await?
{
return Err(ApiError::NotFound);
}
let versions =
DBSharedInstanceVersion::get_for_instance(id, &**pool).await?;
let versions = versions
.into_iter()
.map(Into::into)
.collect::<Vec<SharedInstanceVersion>>();
Ok(HttpResponse::Ok().json(versions))
} else {
Err(ApiError::NotFound)
}
}
pub async fn shared_instance_version_get(
req: HttpRequest,
pool: Data<PgPool>,
redis: Data<RedisPool>,
info: web::Path<(SharedInstanceVersionId,)>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let version_id = info.into_inner().0.into();
let user = get_maybe_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_READ,
)
.await?
.map(|(_, user)| user);
let version = DBSharedInstanceVersion::get(version_id, &**pool).await?;
if let Some(version) = version {
let instance =
DBSharedInstance::get(version.shared_instance_id, &**pool).await?;
if let Some(instance) = instance {
if !can_access_instance_as_maybe_user(
&pool, &redis, &instance, user,
)
.await?
{
return Err(ApiError::NotFound);
}
let version: SharedInstanceVersion = version.into();
Ok(HttpResponse::Ok().json(version))
} else {
Err(ApiError::NotFound)
}
} else {
Err(ApiError::NotFound)
}
}
async fn can_access_instance_as_maybe_user(
pool: &PgPool,
redis: &RedisPool,
instance: &DBSharedInstance,
user: Option<User>,
) -> Result<bool, ApiError> {
if instance.public {
return Ok(true);
}
let users =
DBSharedInstanceUser::get_from_instance(instance.id, pool, redis)
.await?;
Ok(user.is_some_and(|user| {
can_access_instance_privately(instance, &users, &user)
}))
}
pub async fn shared_instance_version_delete(
req: HttpRequest,
pool: Data<PgPool>,
redis: Data<RedisPool>,
info: web::Path<(SharedInstanceVersionId,)>,
session_queue: Data<AuthQueue>,
) -> Result<HttpResponse, ApiError> {
let version_id = info.into_inner().0.into();
let user = get_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_VERSION_DELETE,
)
.await?
.1;
let shared_instance_version =
DBSharedInstanceVersion::get(version_id, &**pool).await?;
if let Some(shared_instance_version) = shared_instance_version {
let shared_instance = DBSharedInstance::get(
shared_instance_version.shared_instance_id,
&**pool,
)
.await?;
if let Some(shared_instance) = shared_instance {
if !user.role.is_mod() && shared_instance.owner_id != user.id.into()
{
let permissions = DBSharedInstanceUser::get_user_permissions(
shared_instance.id,
user.id.into(),
&**pool,
)
.await?;
if let Some(permissions) = permissions {
if !permissions
.contains(SharedInstanceUserPermissions::DELETE)
{
return Err(ApiError::CustomAuthentication(
"You do not have permission to delete this shared instance version.".to_string()
));
}
} else {
return Err(ApiError::NotFound);
}
}
delete_instance_version(shared_instance.id, version_id, &pool)
.await?;
Ok(HttpResponse::NoContent().body(""))
} else {
Err(ApiError::NotFound)
}
} else {
Err(ApiError::NotFound)
}
}
async fn delete_instance_version(
instance_id: DBSharedInstanceId,
version_id: DBSharedInstanceVersionId,
pool: &PgPool,
) -> Result<(), ApiError> {
let mut transaction = pool.begin().await?;
sqlx::query!(
"
DELETE FROM shared_instance_versions
WHERE id = $1
",
version_id as DBSharedInstanceVersionId,
)
.execute(&mut *transaction)
.await?;
sqlx::query!(
"
UPDATE shared_instances
SET current_version_id = (
SELECT id FROM shared_instance_versions
WHERE shared_instance_id = $1
ORDER BY created DESC
LIMIT 1
)
WHERE id = $1
",
instance_id as DBSharedInstanceId,
)
.execute(&mut *transaction)
.await?;
transaction.commit().await?;
Ok(())
}
pub async fn shared_instance_version_download(
req: HttpRequest,
pool: Data<PgPool>,
redis: Data<RedisPool>,
file_host: Data<Arc<dyn FileHost + Send + Sync>>,
info: web::Path<(SharedInstanceVersionId,)>,
session_queue: Data<AuthQueue>,
) -> Result<Redirect, ApiError> {
let version_id = info.into_inner().0.into();
let user = get_maybe_user_from_headers(
&req,
&**pool,
&redis,
&session_queue,
Scopes::SHARED_INSTANCE_VERSION_READ,
)
.await?
.map(|(_, user)| user);
let version = DBSharedInstanceVersion::get(version_id, &**pool).await?;
if let Some(version) = version {
let instance =
DBSharedInstance::get(version.shared_instance_id, &**pool).await?;
if let Some(instance) = instance {
if !can_access_instance_as_maybe_user(
&pool, &redis, &instance, user,
)
.await?
{
return Err(ApiError::NotFound);
}
let file_name = format!(
"shared_instance/{}.mrpack",
SharedInstanceVersionId::from(version_id)
);
let url =
file_host.get_url_for_private_file(&file_name, 180).await?;
Ok(Redirect::to(url).see_other())
} else {
Err(ApiError::NotFound)
}
} else {
Err(ApiError::NotFound)
}
}

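As a usage illustration (not part of the diff), the new shared-instance routes could be exercised from a small client. This is a minimal sketch assuming the routes are mounted under a `/v3` prefix and that a personal access token carrying the `SHARED_INSTANCE_*` scopes goes in the `Authorization` header; the base URL, token, and title below are placeholders, and it assumes reqwest (with its json feature), tokio, and serde_json.

use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Placeholders: base URL, token, and field values are illustrative only.
    let base = "https://api.example.com/v3";
    let token = "<personal access token with SHARED_INSTANCE_* scopes>";
    let client = reqwest::Client::new();

    // POST /shared-instance — matches the `shared_instance_create` route above.
    let created: serde_json::Value = client
        .post(format!("{base}/shared-instance"))
        .header("Authorization", token)
        .json(&json!({ "title": "Example instance", "public": false }))
        .send()
        .await?
        .json()
        .await?;

    // GET /shared-instance/{id}/version — matches `shared_instance_version_list`.
    let id = created["id"].as_str().unwrap_or_default();
    let versions: serde_json::Value = client
        .get(format!("{base}/shared-instance/{id}/version"))
        .header("Authorization", token)
        .send()
        .await?
        .json()
        .await?;
    println!("{versions:#}");

    Ok(())
}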

@@ -6,7 +6,7 @@ use crate::database::models::image_item;
use crate::database::models::notification_item::NotificationBuilder;
use crate::database::models::thread_item::ThreadMessageBuilder;
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::ids::{ThreadId, ThreadMessageId};
use crate::models::images::{Image, ImageContext};
use crate::models::notifications::NotificationBody;
@@ -606,7 +606,12 @@ pub async fn message_delete(
for image in images {
let name = image.url.split(&format!("{cdn_url}/")).nth(1);
if let Some(icon_path) = name {
file_host.delete_file_version("", icon_path).await?;
file_host
.delete_file(
icon_path,
FileHostPublicity::Public, // FIXME: Consider using private file storage?
)
.await?;
}
database::DBImage::remove(image.id, &mut transaction, &redis)
.await?;


@@ -1,6 +1,7 @@
use std::{collections::HashMap, sync::Arc};
use super::{ApiError, oauth_clients::get_user_clients};
use crate::file_hosting::FileHostPublicity;
use crate::util::img::delete_old_images;
use crate::{
auth::{filter_visible_projects, get_user_from_headers},
@@ -14,7 +15,10 @@ use crate::{
users::{Badges, Role},
},
queue::session::AuthQueue,
util::{routes::read_from_payload, validate::validation_errors_to_string},
util::{
routes::read_limited_from_payload,
validate::validation_errors_to_string,
},
};
use actix_web::{HttpRequest, HttpResponse, web};
use ariadne::ids::UserId;
@@ -576,11 +580,12 @@ pub async fn user_icon_edit(
delete_old_images(
actual_user.avatar_url,
actual_user.raw_avatar_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;
let bytes = read_from_payload(
let bytes = read_limited_from_payload(
&mut payload,
262144,
"Icons must be smaller than 256KiB",
@@ -590,6 +595,7 @@ pub async fn user_icon_edit(
let user_id: UserId = actual_user.id.into();
let upload_result = crate::util::img::upload_image_optimized(
&format!("data/{user_id}"),
FileHostPublicity::Public,
bytes.freeze(),
&ext.ext,
Some(96),
@@ -648,6 +654,7 @@ pub async fn user_icon_delete(
delete_old_images(
actual_user.avatar_url,
actual_user.raw_avatar_url,
FileHostPublicity::Public,
&***file_host,
)
.await?;


@@ -9,7 +9,7 @@ use crate::database::models::version_item::{
};
use crate::database::models::{self, DBOrganization, image_item};
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::ids::{ImageId, ProjectId, VersionId};
use crate::models::images::{Image, ImageContext};
use crate::models::notifications::NotificationBody;
@@ -952,12 +952,12 @@ pub async fn upload_file(
format!("data/{}/versions/{}/{}", project_id, version_id, &file_name);
let upload_data = file_host
.upload_file(content_type, &file_path, data)
.upload_file(content_type, &file_path, FileHostPublicity::Public, data)
.await?;
uploaded_files.push(UploadedFile {
file_id: upload_data.file_id,
file_name: file_path,
name: file_path,
publicity: FileHostPublicity::Public,
});
let sha1_bytes = upload_data.content_sha1.into_bytes();


@@ -1,6 +1,6 @@
use std::str::FromStr;
pub fn parse_var<T: FromStr>(var: &'static str) -> Option<T> {
pub fn parse_var<T: FromStr>(var: &str) -> Option<T> {
dotenvy::var(var).ok().and_then(|i| i.parse().ok())
}
pub fn parse_strings_from_var(var: &'static str) -> Option<Vec<String>> {

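One practical effect of relaxing `parse_var` from `&'static str` to `&str` is that the variable name can now be built at runtime rather than having to be a string literal. A minimal, hypothetical illustration (the `prefix`, variable names, and defaults are invented for the example and assume `parse_var` is in scope):

// Hypothetical caller: the env var name is assembled at runtime, which the
// previous `&'static str` bound would have rejected at compile time.
fn bind_address(prefix: &str) -> (String, u16) {
    let host: String = parse_var(&format!("{prefix}_HOST"))
        .unwrap_or_else(|| "0.0.0.0".to_string());
    let port: u16 = parse_var(&format!("{prefix}_PORT")).unwrap_or(8000);
    (host, port)
}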

@@ -1,3 +1,5 @@
pub const MRPACK_MIME_TYPE: &str = "application/x-modrinth-modpack+zip";
pub fn get_image_content_type(extension: &str) -> Option<&'static str> {
match extension {
"bmp" => Some("image/bmp"),
@@ -24,7 +26,7 @@ pub fn project_file_type(ext: &str) -> Option<&str> {
match ext {
"jar" => Some("application/java-archive"),
"zip" | "litemod" => Some("application/zip"),
"mrpack" => Some("application/x-modrinth-modpack+zip"),
"mrpack" => Some(MRPACK_MIME_TYPE),
_ => None,
}
}


@@ -1,7 +1,7 @@
use crate::database;
use crate::database::models::image_item;
use crate::database::redis::RedisPool;
use crate::file_hosting::FileHost;
use crate::file_hosting::{FileHost, FileHostPublicity};
use crate::models::images::ImageContext;
use crate::routes::ApiError;
use color_thief::ColorFormat;
@@ -38,11 +38,14 @@ pub struct UploadImageResult {
pub raw_url: String,
pub raw_url_path: String,
pub publicity: FileHostPublicity,
pub color: Option<u32>,
}
pub async fn upload_image_optimized(
upload_folder: &str,
publicity: FileHostPublicity,
bytes: bytes::Bytes,
file_extension: &str,
target_width: Option<u32>,
@@ -80,6 +83,7 @@ pub async fn upload_image_optimized(
target_width.unwrap_or(0),
processed_image_ext
),
publicity,
processed_image,
)
.await?,
@@ -92,6 +96,7 @@ pub async fn upload_image_optimized(
.upload_file(
content_type,
&format!("{upload_folder}/{hash}.{file_extension}"),
publicity,
bytes,
)
.await?;
@@ -107,6 +112,9 @@ pub async fn upload_image_optimized(
raw_url: url,
raw_url_path: upload_data.file_name,
publicity,
color,
})
}
@@ -165,6 +173,7 @@ fn convert_to_webp(img: &DynamicImage) -> Result<Vec<u8>, ImageError> {
pub async fn delete_old_images(
image_url: Option<String>,
raw_image_url: Option<String>,
publicity: FileHostPublicity,
file_host: &dyn FileHost,
) -> Result<(), ApiError> {
let cdn_url = dotenvy::var("CDN_URL")?;
@@ -173,7 +182,7 @@ pub async fn delete_old_images(
let name = image_url.split(&cdn_url_start).nth(1);
if let Some(icon_path) = name {
file_host.delete_file_version("", icon_path).await?;
file_host.delete_file(icon_path, publicity).await?;
}
}
@@ -181,7 +190,7 @@ pub async fn delete_old_images(
let name = raw_image_url.split(&cdn_url_start).nth(1);
if let Some(icon_path) = name {
file_host.delete_file_version("", icon_path).await?;
file_host.delete_file(icon_path, publicity).await?;
}
}


@@ -1,11 +1,14 @@
use crate::routes::ApiError;
use crate::routes::v3::project_creation::CreateError;
use crate::util::validate::validation_errors_to_string;
use actix_multipart::Field;
use actix_web::web::Payload;
use bytes::BytesMut;
use futures::StreamExt;
use serde::de::DeserializeOwned;
use validator::Validate;
pub async fn read_from_payload(
pub async fn read_limited_from_payload(
payload: &mut Payload,
cap: usize,
err_msg: &'static str,
@@ -25,6 +28,28 @@ pub async fn read_from_payload(
Ok(bytes)
}
pub async fn read_typed_from_payload<T>(
payload: &mut Payload,
) -> Result<T, ApiError>
where
T: DeserializeOwned + Validate,
{
let mut bytes = BytesMut::new();
while let Some(item) = payload.next().await {
bytes.extend_from_slice(&item.map_err(|_| {
ApiError::InvalidInput(
"Unable to parse bytes in payload sent!".to_string(),
)
})?);
}
let parsed: T = serde_json::from_slice(&bytes)?;
parsed.validate().map_err(|err| {
ApiError::InvalidInput(validation_errors_to_string(err, None))
})?;
Ok(parsed)
}
pub async fn read_from_field(
field: &mut Field,
cap: usize,


@@ -45,7 +45,6 @@ async fn get_webhook_metadata(
project_id: ProjectId,
pool: &PgPool,
redis: &RedisPool,
emoji: bool,
) -> Result<Option<WebhookMetadata>, ApiError> {
let project = crate::database::models::project_item::DBProject::get_id(
project_id.into(),
@@ -159,56 +158,13 @@ async fn get_webhook_metadata(
categories_formatted: project
.categories
.into_iter()
.map(|mut x| format!("{}{x}", x.remove(0).to_uppercase()))
.collect::<Vec<_>>(),
.map(format_category_or_loader)
.collect(),
loaders_formatted: project
.inner
.loaders
.into_iter()
.map(|loader| {
let mut x = if &*loader == "datapack" {
"Data Pack".to_string()
} else if &*loader == "mrpack" {
"Modpack".to_string()
} else {
loader.clone()
};
if emoji {
let emoji_id: i64 = match &*loader {
"bukkit" => 1049793345481883689,
"bungeecord" => 1049793347067314220,
"canvas" => 1107352170656968795,
"datapack" => 1057895494652788866,
"fabric" => 1049793348719890532,
"folia" => 1107348745571537018,
"forge" => 1049793350498275358,
"iris" => 1107352171743281173,
"liteloader" => 1049793351630733333,
"minecraft" => 1049793352964526100,
"modloader" => 1049793353962762382,
"neoforge" => 1140437823783190679,
"optifine" => 1107352174415052901,
"paper" => 1049793355598540810,
"purpur" => 1140436034505674762,
"quilt" => 1049793857681887342,
"rift" => 1049793359373414502,
"spigot" => 1049793413886779413,
"sponge" => 1049793416969605231,
"vanilla" => 1107350794178678855,
"velocity" => 1049793419108700170,
"waterfall" => 1049793420937412638,
_ => 1049805243866681424,
};
format!(
"<:{loader}:{emoji_id}> {}{x}",
x.remove(0).to_uppercase()
)
} else {
format!("{}{x}", x.remove(0).to_uppercase())
}
})
.map(format_category_or_loader)
.collect(),
versions_formatted: formatted_game_versions,
gallery_image: project
@@ -229,7 +185,7 @@ pub async fn send_slack_webhook(
webhook_url: String,
message: Option<String>,
) -> Result<(), ApiError> {
let metadata = get_webhook_metadata(project_id, pool, redis, false).await?;
let metadata = get_webhook_metadata(project_id, pool, redis).await?;
if let Some(metadata) = metadata {
let mut blocks = vec![];
@@ -400,7 +356,7 @@ pub async fn send_discord_webhook(
webhook_url: String,
message: Option<String>,
) -> Result<(), ApiError> {
let metadata = get_webhook_metadata(project_id, pool, redis, true).await?;
let metadata = get_webhook_metadata(project_id, pool, redis).await?;
if let Some(project) = metadata {
let mut fields = vec![];
@@ -619,3 +575,35 @@ fn get_gv_range(
output
}
// Converted from knossos
// See: packages/utils/utils.ts
// https://github.com/modrinth/code/blob/47af459f24e541a844b42b1c8427af6a7b86381e/packages/utils/utils.ts#L147-L196
fn format_category_or_loader(mut x: String) -> String {
match &*x {
"modloader" => "Risugami's ModLoader".to_string(),
"bungeecord" => "BungeeCord".to_string(),
"liteloader" => "LiteLoader".to_string(),
"neoforge" => "NeoForge".to_string(),
"game-mechanics" => "Game Mechanics".to_string(),
"worldgen" => "World Generation".to_string(),
"core-shaders" => "Core Shaders".to_string(),
"gui" => "GUI".to_string(),
"8x-" => "8x or lower".to_string(),
"512x+" => "512x or higher".to_string(),
"kitchen-sink" => "Kitchen Sink".to_string(),
"path-tracing" => "Path Tracing".to_string(),
"pbr" => "PBR".to_string(),
"datapack" => "Data Pack".to_string(),
"colored-lighting" => "Colored Lighting".to_string(),
"optifine" => "OptiFine".to_string(),
"bta-babric" => "BTA (Babric)".to_string(),
"legacy-fabric" => "Legacy Fabric".to_string(),
"java-agent" => "Java Agent".to_string(),
"nilloader" => "NilLoader".to_string(),
"mrpack" => "Modpack".to_string(),
"minecraft" => "Resource Pack".to_string(),
"vanilla" => "Vanilla Shader".to_string(),
_ => format!("{}{x}", x.remove(0).to_uppercase()),
}
}

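As a quick sanity check of the mapping above, a small test sketch covering both an explicit entry and the fallback branch that only uppercases the first character (it assumes it sits in the same module as `format_category_or_loader`):

#[cfg(test)]
mod tests {
    use super::format_category_or_loader;

    #[test]
    fn formats_known_and_fallback_slugs() {
        // Slugs with an explicit mapping.
        assert_eq!(format_category_or_loader("neoforge".to_string()), "NeoForge");
        assert_eq!(format_category_or_loader("datapack".to_string()), "Data Pack");
        // Fallback branch: only the first character is uppercased.
        assert_eq!(format_category_or_loader("fabric".to_string()), "Fabric");
        assert_eq!(format_category_or_loader("quilt".to_string()), "Quilt");
    }
}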

@@ -23,6 +23,7 @@ quick-xml = { workspace = true, features = ["async-tokio"] }
enumset.workspace = true
chardetng.workspace = true
encoding_rs.workspace = true
hashlink.workspace = true
chrono = { workspace = true, features = ["serde"] }
daedalus.workspace = true
@@ -75,6 +76,9 @@ ariadne.workspace = true
[target.'cfg(windows)'.dependencies]
winreg.workspace = true
[build-dependencies]
dunce.workspace = true
[features]
tauri = ["dep:tauri"]
cli = ["dep:indicatif"]

packages/app-lib/build.rs

@@ -0,0 +1,44 @@
use std::ffi::OsString;
use std::path::PathBuf;
use std::process::{Command, exit};
use std::{env, fs};
fn main() {
println!("cargo::rerun-if-changed=java/gradle");
println!("cargo::rerun-if-changed=java/src");
println!("cargo::rerun-if-changed=java/build.gradle.kts");
println!("cargo::rerun-if-changed=java/settings.gradle.kts");
println!("cargo::rerun-if-changed=java/gradle.properties");
let out_dir =
dunce::canonicalize(PathBuf::from(env::var_os("OUT_DIR").unwrap()))
.unwrap();
println!(
"cargo::rustc-env=JAVA_JARS_DIR={}",
out_dir.join("java/libs").display()
);
let gradle_path = fs::canonicalize(
#[cfg(target_os = "windows")]
"java\\gradlew.bat",
#[cfg(not(target_os = "windows"))]
"java/gradlew",
)
.unwrap();
let mut build_dir_str = OsString::from("-Dorg.gradle.project.buildDir=");
build_dir_str.push(out_dir.join("java"));
let exit_status = Command::new(gradle_path)
.arg(build_dir_str)
.arg("build")
.arg("--no-daemon")
.arg("--console=rich")
.current_dir(dunce::canonicalize("java").unwrap())
.status()
.expect("Failed to wait on Gradle build");
if !exit_status.success() {
println!("cargo::error=Gradle build failed with {exit_status}");
exit(exit_status.code().unwrap_or(1));
}
}
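The `cargo::rustc-env=JAVA_JARS_DIR=...` line bakes the Gradle output directory into the crate at compile time. How app-lib consumes it is not shown in this diff, but it would look roughly like this (the helper name is hypothetical):

use std::path::PathBuf;

// Hypothetical helper: resolves the directory where the Gradle build placed
// the bundled jars, exposed to the crate by build.rs via `rustc-env`.
fn java_jars_dir() -> PathBuf {
    PathBuf::from(env!("JAVA_JARS_DIR"))
}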

Some files were not shown because too many files have changed in this diff.