Diffstat (limited to 'packages/backend/src/server')
-rw-r--r--  packages/backend/src/server/SkRateLimiterService.md                | 143
-rw-r--r--  packages/backend/src/server/api/SkRateLimiterService.ts            | 279
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/active-users.ts   |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/ap-request.ts     |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/drive.ts          |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/federation.ts     |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/instance.ts       |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/notes.ts          |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/user/drive.ts     |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/user/following.ts |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/user/notes.ts     |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/user/pv.ts        |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/user/reactions.ts |   7
-rw-r--r--  packages/backend/src/server/api/endpoints/charts/users.ts          |   7
14 files changed, 358 insertions, 148 deletions
diff --git a/packages/backend/src/server/SkRateLimiterService.md b/packages/backend/src/server/SkRateLimiterService.md
new file mode 100644
index 0000000000..762f8dfe14
--- /dev/null
+++ b/packages/backend/src/server/SkRateLimiterService.md
@@ -0,0 +1,143 @@
+# SkRateLimiterService - Leaky Bucket Rate Limit Implementation
+
+SkRateLimiterService replaces Misskey's RateLimiterService for all use cases.
+It offers a simplified API, detailed metrics, and support for Rate Limit headers.
+Its prime feature is a Leaky Bucket implementation: a flexible rate limiting scheme that better supports the bursty request patterns common to human interaction.
+
+## Compatibility
+
+The API is backwards-compatible with existing limit definitions, but it's preferred to use the new BucketRateLimit interface.
+Legacy limits will be "translated" into a bucket limit in a way that attempts to respect max, duration, and minInterval (if present).
+SkRateLimiterService is not quite plug-and-play compatible with existing call sites, as it no longer throws when a limit is exceeded.
+Instead, the returned LimitInfo object will have `blocked` set to true.
+Callers are responsible for checking this property and taking any desired action, such as rejecting a request or returning limit details.
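A minimal sketch of such a call site (names like `handleRequest` are illustrative, not part of this diff; only the `LimitInfo` shape comes from the service):

```typescript
// Hypothetical caller. SkRateLimiterService no longer throws when a limit is
// exceeded, so the caller must check `blocked` and react itself.
interface LimitInfo {
  blocked: boolean;
  remaining: number;
  resetSec: number;
  resetMs: number;
  fullResetSec: number;
  fullResetMs: number;
}

function handleRequest(info: LimitInfo): { status: number; body: string } {
  if (info.blocked) {
    // Reject the request and surface the retry delay to the client.
    return { status: 429, body: `Rate limit exceeded; retry in ${info.resetSec}s` };
  }
  return { status: 200, body: 'ok' };
}
```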
+
+## Headers
+
+LimitInfo objects (returned by `SkRateLimiterService.limit()`) can be passed to `rate-limit-utils.sendRateLimitHeaders()` to send standard rate limit headers with an HTTP response.
+The defined headers are:
+
+| Header | Definition | Example |
+|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|
+| `X-RateLimit-Remaining` | Number of calls that can be made without triggering the rate limit. Zero if the limit is already exceeded, or if the next request would exceed it. | `X-RateLimit-Remaining: 1` |
+| `X-RateLimit-Clear` | Time in seconds required to completely clear the rate limit "bucket". | `X-RateLimit-Clear: 1.5` |
+| `X-RateLimit-Reset` | Contains the number of seconds to wait before retrying the current request. Clients should delay for at least this long before making another call. Only included if the rate limit has already been exceeded. | `X-RateLimit-Reset: 0.755` |
+| `Retry-After` | Like `X-RateLimit-Reset`, but rounded up to whole seconds. Preserved for backwards compatibility, and only included if the rate limit has already been exceeded. | `Retry-After: 2` |
+
+Note: rate limit headers are not standardized, except for `Retry-After`.
+Header meanings and usage have been devised by adapting common patterns to work with a leaky bucket rate limit model.
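As a sketch of how the table maps onto header generation (the actual logic lives in `rate-limit-utils.sendRateLimitHeaders()` and may differ):

```typescript
interface LimitInfo {
  blocked: boolean;
  remaining: number;
  resetSec: number;
  resetMs: number;
  fullResetSec: number;
  fullResetMs: number;
}

// Approximate header mapping for the table above; illustrative, not the
// actual implementation from rate-limit-utils.
function rateLimitHeaders(info: LimitInfo): Record<string, string> {
  const headers: Record<string, string> = {
    'X-RateLimit-Remaining': String(info.remaining),
    'X-RateLimit-Clear': String(info.fullResetMs / 1000),
  };
  if (info.blocked) {
    // Only sent once the limit has been exceeded.
    headers['X-RateLimit-Reset'] = String(info.resetMs / 1000);
    headers['Retry-After'] = String(Math.ceil(info.resetMs / 1000)); // whole seconds
  }
  return headers;
}
```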
+
+## Performance
+
+SkRateLimiterService makes between one and four Redis transactions per rate limit check.
+The first transaction is read-only, while the rest each perform at least one write operation.
+Two integer keys are stored per client/subject, and both expire together after the maximum duration of the limit.
+While performance has not been formally tested, it's expected that SkRateLimiterService has an impact roughly on par with the legacy RateLimiterService.
+Redis memory usage should be notably lower due to the reduced number of keys and avoidance of set / array constructions.
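The two keys follow the naming scheme of `createLimitKey()` from this commit, with `'c'` for the counter and `'t'` for the timestamp (the actor and limit key below are hypothetical):

```typescript
// Key layout: one counter key and one timestamp key per actor/limit pair,
// both expiring together. Mirrors createLimitKey() in SkRateLimiterService.ts.
function createLimitKey(limitKey: string, actor: string, value: string): string {
  return `rl_${actor}_${limitKey}_${value}`;
}

const counterKey = createLimitKey('notes-create', 'user123', 'c');   // integer counter
const timestampKey = createLimitKey('notes-create', 'user123', 't'); // integer timestamp
```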
+
+## Concurrency and Multi-Node Correctness
+
+To provide consistency across multi-node environments, leaky bucket is implemented with only atomic operations (`Increment`, `Decrement`, `Add`, and `Subtract`).
+This allows the use of Optimistic Locking with read-modify-check logic.
+If a data conflict is detected during the "drip" phase, then the update is safely reverted by executing its inverse (`Increment` <-> `Decrement`, `Add` <-> `Subtract`).
+We don't need to check for conflicts when adding the current request to the bucket, as all other logic already accounts for the case where the bucket has been "overfilled".
+Should an extra request slip through, the limit delay will be extended until the bucket size is back within limits.
+
+There is one non-atomic `Set` operation used to populate the initial Timestamp value, but we can safely ignore data races there.
+Any possible conflict would have to occur within a few-milliseconds window, which means that the final value can be no more than a few milliseconds off from the expected value.
+This error does not compound, as all further operations are relative (Increment and Add).
+Thus, it's considered an acceptable tradeoff given the limitations imposed by Redis and ioredis.
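The rollback pattern can be sketched with an in-memory stand-in for Redis (single-threaded, so the conflict is injected manually; all names here are illustrative):

```typescript
// In-memory stand-in for the two Redis counters. Only the atomic-style
// operations the service relies on are modeled.
class Store {
  private data = new Map<string, number>();
  get(key: string): number { return this.data.get(key) ?? 0; }
  incrby(key: string, n: number): number {
    const v = this.get(key) + n;
    this.data.set(key, v);
    return v;
  }
}

// Apply a drip optimistically; if the canary shows another writer moved the
// timestamp since our read, revert by executing the inverse operations.
function dripWithRollback(store: Store, expectedTs: number, deltaTs: number, deltaCounter: number): boolean {
  const canary = store.get('timestamp'); // re-read inside the write transaction
  store.incrby('timestamp', deltaTs);
  store.incrby('counter', -deltaCounter);
  if (canary !== expectedTs) {
    store.incrby('timestamp', -deltaTs); // inverse of INCRBY is DECRBY
    store.incrby('counter', deltaCounter);
    return false; // conflict detected; state restored
  }
  return true;
}
```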
+
+## Algorithm Pseudocode
+
+The Atomic Leaky Bucket algorithm is described here, in pseudocode:
+
+```
+# Terms
+# * Now - UNIX timestamp of the current moment
+# * Bucket Size - Maximum number of requests allowed in the bucket
+# * Counter - Number of requests in the bucket
+# * Drip Rate - How often to decrement counter
+# * Drip Size - How much to decrement the counter
+# * Timestamp - UNIX timestamp of last bucket drip
+# * Delta Counter - Difference between current and expected counter value
+# * Delta Timestamp - Difference between current and expected timestamp value
+# * Factor - Caller-provided scaling divisor (smaller factor = larger bucket = less restrictive)
+
+# 0 - Calculations
+dripRate = ceil(limit.dripRate ?? 1000);
+dripSize = ceil(limit.dripSize ?? 1);
+bucketSize = max(ceil(limit.size / factor), 1);
+maxExpiration = max(ceil((dripRate * ceil(bucketSize / dripSize)) / 1000), 1);
+
+# 1 - Read
+MULTI
+ GET 'counter' INTO counter
+ GET 'timestamp' INTO timestamp
+EXEC
+
+# 2 - Drip
+if (counter > 0) {
+ # Deltas
+ deltaCounter = floor((now - timestamp) / dripRate) * dripSize;
+ deltaCounter = min(deltaCounter, counter);
+ deltaTimestamp = deltaCounter * dripRate;
+ if (deltaCounter > 0) {
+ # Update
+ expectedTimestamp = timestamp
+ MULTI
+ GET 'timestamp' INTO canaryTimestamp
+ INCRBY 'timestamp' deltaTimestamp
+ EXPIRE 'timestamp' maxExpiration
+ GET 'timestamp' INTO timestamp
+ DECRBY 'counter' deltaCounter
+ EXPIRE 'counter' maxExpiration
+ GET 'counter' INTO counter
+ EXEC
+ # Rollback
+ if (canaryTimestamp != expectedTimestamp) {
+ MULTI
+ DECRBY 'timestamp' deltaTimestamp
+ GET 'timestamp' INTO timestamp
+ INCRBY 'counter' deltaCounter
+ GET 'counter' INTO counter
+ EXEC
+ }
+ }
+}
+
+# 3 - Check
+blocked = counter >= bucketSize
+if (!blocked) {
+ if (timestamp == 0) {
+ # Edge case - set the initial value for timestamp.
+ # Otherwise the first request will immediately drip away.
+ MULTI
+ SET 'timestamp', now
+ EXPIRE 'timestamp' maxExpiration
+ INCR 'counter'
+ EXPIRE 'counter' maxExpiration
+ GET 'counter' INTO counter
+ EXEC
+ } else {
+ MULTI
+ INCR 'counter'
+ EXPIRE 'counter' maxExpiration
+ GET 'counter' INTO counter
+ EXEC
+ }
+}
+
+# 4 - Handle
+if (blocked) {
+ # Application-specific code goes here.
+ # At this point blocked, counter, and timestamp are all accurate and synced to redis.
+ # Caller can apply limits, calculate headers, log audit failure, or anything else.
+}
+```
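A minimal, single-threaded translation of the pseudocode above (Redis replaced by a plain `Map`, so the canary/rollback machinery is unnecessary and omitted; class and variable names are illustrative):

```typescript
interface Bucket { counter: number; timestamp: number; }

// In-memory simulation of the Atomic Leaky Bucket, for illustration only.
class LeakyBucket {
  private buckets = new Map<string, Bucket>();

  constructor(
    private readonly size: number,     // bucket size
    private readonly dripRate = 1000,  // ms between drips
    private readonly dripSize = 1,     // units removed per drip
  ) {}

  /** Returns true if the request is blocked by the limit. */
  limit(key: string, now: number): boolean {
    const b = this.buckets.get(key) ?? { counter: 0, timestamp: 0 };

    // 2 - Drip: remove any whole drips that have elapsed since the last tick.
    if (b.counter > 0) {
      const drips = Math.floor((now - b.timestamp) / this.dripRate) * this.dripSize;
      const deltaCounter = Math.min(drips, b.counter);
      if (deltaCounter > 0) {
        b.counter -= deltaCounter;
        b.timestamp += deltaCounter * this.dripRate;
      }
    }

    // 3 - Check: block if full, otherwise record this request.
    const blocked = b.counter >= this.size;
    if (!blocked) {
      if (b.timestamp === 0) b.timestamp = now; // edge case from the pseudocode
      b.counter++;
    }

    this.buckets.set(key, b);
    return blocked;
  }
}
```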
+
+## Notes, Resources, and Further Reading
+
+* https://en.wikipedia.org/wiki/Leaky_bucket#As_a_meter
+* https://ietf-wg-httpapi.github.io/ratelimit-headers/darrelmiller-policyname/draft-ietf-httpapi-ratelimit-headers.txt
+* https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Retry-After
+* https://stackoverflow.com/a/16022625
diff --git a/packages/backend/src/server/api/SkRateLimiterService.ts b/packages/backend/src/server/api/SkRateLimiterService.ts
index 6415ee905c..38c97b63df 100644
--- a/packages/backend/src/server/api/SkRateLimiterService.ts
+++ b/packages/backend/src/server/api/SkRateLimiterService.ts
@@ -5,16 +5,13 @@
import { Inject, Injectable } from '@nestjs/common';
import Redis from 'ioredis';
-import { LoggerService } from '@/core/LoggerService.js';
import { TimeService } from '@/core/TimeService.js';
import { EnvService } from '@/core/EnvService.js';
+import { BucketRateLimit, LegacyRateLimit, LimitInfo, RateLimit, hasMinLimit, isLegacyRateLimit, Keyed, hasMaxLimit, disabledLimitInfo, MaxLegacyLimit, MinLegacyLimit } from '@/misc/rate-limit-utils.js';
import { DI } from '@/di-symbols.js';
-import type Logger from '@/logger.js';
-import { BucketRateLimit, LegacyRateLimit, LimitInfo, RateLimit, hasMinLimit, isLegacyRateLimit, Keyed } from '@/misc/rate-limit-utils.js';
@Injectable()
export class SkRateLimiterService {
- private readonly logger: Logger;
private readonly disabled: boolean;
constructor(
@@ -24,32 +21,31 @@ export class SkRateLimiterService {
@Inject(DI.redis)
private readonly redisClient: Redis.Redis,
- @Inject(LoggerService)
- loggerService: LoggerService,
-
@Inject(EnvService)
envService: EnvService,
) {
- this.logger = loggerService.getLogger('limiter');
- this.disabled = envService.env.NODE_ENV !== 'production'; // TODO disable in TEST *only*
+ this.disabled = envService.env.NODE_ENV === 'test';
}
+ /**
+ * Check & increment a rate limit
+ * @param limit The limit definition
+ * @param actor Client who is calling this limit
+ * @param factor Scaling factor - smaller = larger limit (less restrictive)
+ */
public async limit(limit: Keyed<RateLimit>, actor: string, factor = 1): Promise<LimitInfo> {
if (this.disabled || factor === 0) {
- return {
- blocked: false,
- remaining: Number.MAX_SAFE_INTEGER,
- resetSec: 0,
- resetMs: 0,
- fullResetSec: 0,
- fullResetMs: 0,
- };
+ return disabledLimitInfo;
}
if (factor < 0) {
throw new Error(`Rate limit factor is negative: ${factor}`);
}
+ return await this.tryLimit(limit, actor, factor);
+ }
+
+ private async tryLimit(limit: Keyed<RateLimit>, actor: string, factor: number): Promise<LimitInfo> {
if (isLegacyRateLimit(limit)) {
return await this.limitLegacy(limit, actor, factor);
} else {
@@ -58,141 +54,200 @@ export class SkRateLimiterService {
}
private async limitLegacy(limit: Keyed<LegacyRateLimit>, actor: string, factor: number): Promise<LimitInfo> {
- const promises: Promise<LimitInfo | null>[] = [];
-
- // The "min" limit - if present - is handled directly.
- if (hasMinLimit(limit)) {
- promises.push(
- this.limitMin(limit, actor, factor),
- );
- }
-
- // Convert the "max" limit into a leaky bucket with 1 drip / second rate.
- if (limit.max != null && limit.duration != null) {
- promises.push(
- this.limitBucket({
- type: 'bucket',
- key: limit.key,
- size: limit.max,
- dripRate: Math.max(Math.round(limit.duration / limit.max), 1),
- }, actor, factor),
- );
+ if (hasMaxLimit(limit)) {
+ return await this.limitLegacyMinMax(limit, actor, factor);
+ } else if (hasMinLimit(limit)) {
+ return await this.limitLegacyMinOnly(limit, actor, factor);
+ } else {
+ return disabledLimitInfo;
}
-
- const [lim1, lim2] = await Promise.all(promises);
- return {
- blocked: (lim1?.blocked || lim2?.blocked) ?? false,
- remaining: Math.min(lim1?.remaining ?? Number.MAX_SAFE_INTEGER, lim2?.remaining ?? Number.MAX_SAFE_INTEGER),
- resetSec: Math.max(lim1?.resetSec ?? 0, lim2?.resetSec ?? 0),
- resetMs: Math.max(lim1?.resetMs ?? 0, lim2?.resetMs ?? 0),
- fullResetSec: Math.max(lim1?.fullResetSec ?? 0, lim2?.fullResetSec ?? 0),
- fullResetMs: Math.max(lim1?.fullResetMs ?? 0, lim2?.fullResetMs ?? 0),
- };
}
- private async limitMin(limit: Keyed<LegacyRateLimit> & { minInterval: number }, actor: string, factor: number): Promise<LimitInfo | null> {
- if (limit.minInterval === 0) return null;
- if (limit.minInterval < 0) throw new Error(`Invalid rate limit ${limit.key}: minInterval is negative (${limit.minInterval})`);
+ private async limitLegacyMinMax(limit: Keyed<MaxLegacyLimit>, actor: string, factor: number): Promise<LimitInfo> {
+ if (limit.duration === 0) return disabledLimitInfo;
+ if (limit.duration < 0) throw new Error(`Invalid rate limit ${limit.key}: duration is negative (${limit.duration})`);
+ if (limit.max < 1) throw new Error(`Invalid rate limit ${limit.key}: max is less than 1 (${limit.max})`);
- const counter = await this.getLimitCounter(limit, actor, 'min');
- const minInterval = Math.max(Math.ceil(limit.minInterval * factor), 0);
+ // Derive initial dripRate from minInterval OR duration/max.
+ const initialDripRate = Math.max(limit.minInterval ?? Math.round(limit.duration / limit.max), 1);
- // Update expiration
- if (counter.c > 0) {
- const isCleared = this.timeService.now - counter.t >= minInterval;
- if (isCleared) {
- counter.c = 0;
- }
- }
+ // Calculate dripSize to reach max at exactly duration
+ const dripSize = Math.max(Math.round(limit.max / (limit.duration / initialDripRate)), 1);
- const blocked = counter.c > 0;
- if (!blocked) {
- counter.c++;
- counter.t = this.timeService.now;
- }
+ // Calculate final dripRate from dripSize and duration/max
+ const dripRate = Math.max(Math.round(limit.duration / (limit.max / dripSize)), 1);
- // Calculate limit status
- const resetMs = Math.max(Math.ceil(minInterval - (this.timeService.now - counter.t)), 0);
- const resetSec = Math.ceil(resetMs / 1000);
- const limitInfo: LimitInfo = { blocked, remaining: 0, resetSec, resetMs, fullResetSec: resetSec, fullResetMs: resetMs };
+ const bucketLimit: Keyed<BucketRateLimit> = {
+ type: 'bucket',
+ key: limit.key,
+ size: limit.max,
+ dripRate,
+ dripSize,
+ };
+ return await this.limitBucket(bucketLimit, actor, factor);
+ }
- // Update the limit counter, but not if blocked
- if (!blocked) {
- // Don't await, or we will slow down the API.
- this.setLimitCounter(limit, actor, counter, resetSec, 'min')
- .catch(err => this.logger.error(`Failed to update limit ${limit.key}:min for ${actor}:`, err));
- }
+ private async limitLegacyMinOnly(limit: Keyed<MinLegacyLimit>, actor: string, factor: number): Promise<LimitInfo> {
+ if (limit.minInterval === 0) return disabledLimitInfo;
+ if (limit.minInterval < 0) throw new Error(`Invalid rate limit ${limit.key}: minInterval is negative (${limit.minInterval})`);
- return limitInfo;
+ const dripRate = Math.max(Math.round(limit.minInterval), 1);
+ const bucketLimit: Keyed<BucketRateLimit> = {
+ type: 'bucket',
+ key: limit.key,
+ size: 1,
+ dripRate,
+ dripSize: 1,
+ };
+ return await this.limitBucket(bucketLimit, actor, factor);
}
+ /**
+ * Implementation of Leaky Bucket rate limiting - see SkRateLimiterService.md for details.
+ */
private async limitBucket(limit: Keyed<BucketRateLimit>, actor: string, factor: number): Promise<LimitInfo> {
if (limit.size < 1) throw new Error(`Invalid rate limit ${limit.key}: size is less than 1 (${limit.size})`);
if (limit.dripRate != null && limit.dripRate < 1) throw new Error(`Invalid rate limit ${limit.key}: dripRate is less than 1 (${limit.dripRate})`);
if (limit.dripSize != null && limit.dripSize < 1) throw new Error(`Invalid rate limit ${limit.key}: dripSize is less than 1 (${limit.dripSize})`);
- const counter = await this.getLimitCounter(limit, actor, 'bucket');
+ // 0 - Calculate
+ const now = this.timeService.now;
const bucketSize = Math.max(Math.ceil(limit.size / factor), 1);
const dripRate = Math.ceil(limit.dripRate ?? 1000);
const dripSize = Math.ceil(limit.dripSize ?? 1);
+ const expirationSec = Math.max(Math.ceil((dripRate * Math.ceil(bucketSize / dripSize)) / 1000), 1);
+
+ // 1 - Read
+ const counterKey = createLimitKey(limit, actor, 'c');
+ const timestampKey = createLimitKey(limit, actor, 't');
+ const counter = await this.getLimitCounter(counterKey, timestampKey);
+
+ // 2 - Drip
+ const dripsSinceLastTick = Math.floor((now - counter.timestamp) / dripRate) * dripSize;
+ const deltaCounter = Math.min(dripsSinceLastTick, counter.counter);
+ const deltaTimestamp = dripsSinceLastTick * dripRate;
+ if (deltaCounter > 0) {
+ // Execute the next drip(s)
+ const results = await this.executeRedisMulti(
+ ['get', timestampKey],
+ ['incrby', timestampKey, deltaTimestamp],
+ ['expire', timestampKey, expirationSec],
+ ['get', timestampKey],
+ ['decrby', counterKey, deltaCounter],
+ ['expire', counterKey, expirationSec],
+ ['get', counterKey],
+ );
+ const expectedTimestamp = counter.timestamp;
+ const canaryTimestamp = results[0] ? parseInt(results[0]) : 0;
+ counter.timestamp = results[3] ? parseInt(results[3]) : 0;
+ counter.counter = results[6] ? parseInt(results[6]) : 0;
- // Update drips
- if (counter.c > 0) {
- const dripsSinceLastTick = Math.floor((this.timeService.now - counter.t) / dripRate) * dripSize;
- counter.c = Math.max(counter.c - dripsSinceLastTick, 0);
+ // Check for a data collision and rollback
+ if (canaryTimestamp !== expectedTimestamp) {
+ const rollbackResults = await this.executeRedisMulti(
+ ['decrby', timestampKey, deltaTimestamp],
+ ['get', timestampKey],
+ ['incrby', counterKey, deltaCounter],
+ ['get', counterKey],
+ );
+ counter.timestamp = rollbackResults[1] ? parseInt(rollbackResults[1]) : 0;
+ counter.counter = rollbackResults[3] ? parseInt(rollbackResults[3]) : 0;
+ }
}
- const blocked = counter.c >= bucketSize;
+ // 3 - Check
+ const blocked = counter.counter >= bucketSize;
if (!blocked) {
- counter.c++;
- counter.t = this.timeService.now;
+ if (counter.timestamp === 0) {
+ const results = await this.executeRedisMulti(
+ ['set', timestampKey, now],
+ ['expire', timestampKey, expirationSec],
+ ['incr', counterKey],
+ ['expire', counterKey, expirationSec],
+ ['get', counterKey],
+ );
+ counter.timestamp = now;
+ counter.counter = results[4] ? parseInt(results[4]) : 0;
+ } else {
+ const results = await this.executeRedisMulti(
+ ['incr', counterKey],
+ ['expire', counterKey, expirationSec],
+ ['get', counterKey],
+ );
+ counter.counter = results[2] ? parseInt(results[2]) : 0;
+ }
}
+ // Calculate how much time is needed to free up a bucket slot
+ const overflow = Math.max((counter.counter + 1) - bucketSize, 0);
+ const dripsNeeded = Math.ceil(overflow / dripSize);
+ const timeNeeded = Math.max((dripRate * dripsNeeded) - (this.timeService.now - counter.timestamp), 0);
+
// Calculate limit status
- const remaining = Math.max(bucketSize - counter.c, 0);
- const resetMs = remaining > 0 ? 0 : Math.max(dripRate - (this.timeService.now - counter.t), 0);
+ const remaining = Math.max(bucketSize - counter.counter, 0);
+ const resetMs = timeNeeded;
const resetSec = Math.ceil(resetMs / 1000);
- const fullResetMs = Math.ceil(counter.c / dripSize) * dripRate;
+ const fullResetMs = Math.ceil(counter.counter / dripSize) * dripRate;
const fullResetSec = Math.ceil(fullResetMs / 1000);
- const limitInfo: LimitInfo = { blocked, remaining, resetSec, resetMs, fullResetSec, fullResetMs };
+ return { blocked, remaining, resetSec, resetMs, fullResetSec, fullResetMs };
+ }
- // Update the limit counter, but not if blocked
- if (!blocked) {
- // Don't await, or we will slow down the API.
- this.setLimitCounter(limit, actor, counter, fullResetSec, 'bucket')
- .catch(err => this.logger.error(`Failed to update limit ${limit.key} for ${actor}:`, err));
- }
+ private async getLimitCounter(counterKey: string, timestampKey: string): Promise<LimitCounter> {
+ const [counter, timestamp] = await this.executeRedisMulti(
+ ['get', counterKey],
+ ['get', timestampKey],
+ );
- return limitInfo;
+ return {
+ counter: counter ? parseInt(counter) : 0,
+ timestamp: timestamp ? parseInt(timestamp) : 0,
+ };
}
- private async getLimitCounter(limit: Keyed<RateLimit>, actor: string, subject: string): Promise<LimitCounter> {
- const key = createLimitKey(limit, actor, subject);
+ private async executeRedisMulti(...batch: RedisCommand[]): Promise<RedisResult[]> {
+ const results = await this.redisClient.multi(batch).exec();
- const value = await this.redisClient.get(key);
- if (value == null) {
- return { t: 0, c: 0 };
+ // Transaction conflict (retryable)
+ if (!results) {
+ throw new ConflictError('Redis error: transaction conflict');
}
- return JSON.parse(value);
- }
+ // Transaction failed (fatal)
+ if (results.length !== batch.length) {
+ throw new Error('Redis error: failed to execute batch');
+ }
- private async setLimitCounter(limit: Keyed<RateLimit>, actor: string, counter: LimitCounter, expiration: number, subject: string): Promise<void> {
- const key = createLimitKey(limit, actor, subject);
- const value = JSON.stringify(counter);
- const expirationSec = Math.max(expiration, 1);
- await this.redisClient.set(key, value, 'EX', expirationSec);
+ // Map responses
+ const errors: Error[] = [];
+ const responses: RedisResult[] = [];
+ for (const [error, response] of results) {
+ if (error) errors.push(error);
+ responses.push(response as RedisResult);
+ }
+
+ // Command failed (fatal)
+ if (errors.length > 0) {
+ const errorMessages = errors
+ .map((e, i) => `Error in command ${i}: ${e}`)
+ .join('\', \'');
+ throw new AggregateError(errors, `Redis error: failed to execute command(s): '${errorMessages}'`);
+ }
+
+ return responses;
}
}
-function createLimitKey(limit: Keyed<RateLimit>, actor: string, subject: string): string {
- return `rl_${actor}_${limit.key}_${subject}`;
+// Not correct, but good enough for the basic commands we use.
+type RedisResult = string | null;
+type RedisCommand = [command: string, ...args: unknown[]];
+
+function createLimitKey(limit: Keyed<RateLimit>, actor: string, value: string): string {
+ return `rl_${actor}_${limit.key}_${value}`;
}
-export interface LimitCounter {
- /** Timestamp */
- t: number;
+class ConflictError extends Error {}
- /** Counter */
- c: number;
+interface LimitCounter {
+ timestamp: number;
+ counter: number;
}
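For illustration, the legacy-to-bucket translation in `limitLegacyMinMax` works out like this (the sample `max`/`duration`/`minInterval` values are hypothetical, not taken from any real endpoint):

```typescript
// Mirrors the dripRate/dripSize derivation in limitLegacyMinMax above.
function translateLegacy(max: number, duration: number, minInterval?: number) {
  // Derive initial dripRate from minInterval OR duration/max.
  const initialDripRate = Math.max(minInterval ?? Math.round(duration / max), 1);
  // Calculate dripSize to reach max at exactly duration.
  const dripSize = Math.max(Math.round(max / (duration / initialDripRate)), 1);
  // Calculate final dripRate from dripSize and duration/max.
  const dripRate = Math.max(Math.round(duration / (max / dripSize)), 1);
  return { size: max, dripRate, dripSize };
}
```

A legacy "10 per 5 seconds" limit becomes a 10-slot bucket dripping 1 unit every 500 ms; adding `minInterval: 1000` stretches it to 2 units every 1000 ms, preserving the same average rate.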
diff --git a/packages/backend/src/server/api/endpoints/charts/active-users.ts b/packages/backend/src/server/api/endpoints/charts/active-users.ts
index f6c0c045df..dcdcf46d0b 100644
--- a/packages/backend/src/server/api/endpoints/charts/active-users.ts
+++ b/packages/backend/src/server/api/endpoints/charts/active-users.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/ap-request.ts b/packages/backend/src/server/api/endpoints/charts/ap-request.ts
index 4c5c0d5d20..28c64229e7 100644
--- a/packages/backend/src/server/api/endpoints/charts/ap-request.ts
+++ b/packages/backend/src/server/api/endpoints/charts/ap-request.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/drive.ts b/packages/backend/src/server/api/endpoints/charts/drive.ts
index 8210ec8fe7..69ff3c5d7a 100644
--- a/packages/backend/src/server/api/endpoints/charts/drive.ts
+++ b/packages/backend/src/server/api/endpoints/charts/drive.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/federation.ts b/packages/backend/src/server/api/endpoints/charts/federation.ts
index 56a5dbea31..bd870cc3d9 100644
--- a/packages/backend/src/server/api/endpoints/charts/federation.ts
+++ b/packages/backend/src/server/api/endpoints/charts/federation.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/instance.ts b/packages/backend/src/server/api/endpoints/charts/instance.ts
index 7f79e1356d..765bf024ee 100644
--- a/packages/backend/src/server/api/endpoints/charts/instance.ts
+++ b/packages/backend/src/server/api/endpoints/charts/instance.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/notes.ts b/packages/backend/src/server/api/endpoints/charts/notes.ts
index b3660b558b..ecac436311 100644
--- a/packages/backend/src/server/api/endpoints/charts/notes.ts
+++ b/packages/backend/src/server/api/endpoints/charts/notes.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/user/drive.ts b/packages/backend/src/server/api/endpoints/charts/user/drive.ts
index 716c41f385..98ec40ade2 100644
--- a/packages/backend/src/server/api/endpoints/charts/user/drive.ts
+++ b/packages/backend/src/server/api/endpoints/charts/user/drive.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/user/following.ts b/packages/backend/src/server/api/endpoints/charts/user/following.ts
index b67b5ca338..cb3dd36bab 100644
--- a/packages/backend/src/server/api/endpoints/charts/user/following.ts
+++ b/packages/backend/src/server/api/endpoints/charts/user/following.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/user/notes.ts b/packages/backend/src/server/api/endpoints/charts/user/notes.ts
index e5587cab86..0742a21210 100644
--- a/packages/backend/src/server/api/endpoints/charts/user/notes.ts
+++ b/packages/backend/src/server/api/endpoints/charts/user/notes.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/user/pv.ts b/packages/backend/src/server/api/endpoints/charts/user/pv.ts
index cbae3a21c1..a220381b00 100644
--- a/packages/backend/src/server/api/endpoints/charts/user/pv.ts
+++ b/packages/backend/src/server/api/endpoints/charts/user/pv.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/user/reactions.ts b/packages/backend/src/server/api/endpoints/charts/user/reactions.ts
index d734240742..3bb33622c2 100644
--- a/packages/backend/src/server/api/endpoints/charts/user/reactions.ts
+++ b/packages/backend/src/server/api/endpoints/charts/user/reactions.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;
diff --git a/packages/backend/src/server/api/endpoints/charts/users.ts b/packages/backend/src/server/api/endpoints/charts/users.ts
index 6e1a8ebd4f..b5452517ab 100644
--- a/packages/backend/src/server/api/endpoints/charts/users.ts
+++ b/packages/backend/src/server/api/endpoints/charts/users.ts
@@ -17,10 +17,11 @@ export const meta = {
allowGet: true,
cacheSec: 60 * 60,
- // 10 calls per 5 seconds
+ // Burst up to 100, then 2/sec average
limit: {
- duration: 1000 * 5,
- max: 10,
+ type: 'bucket',
+ size: 100,
+ dripRate: 500,
},
} as const;