Problem A statement

[string, greedy]

https://codeforces.com/contest/1626/problem/A

Solution

We just need to count the letters: those with frequency two go at both the start and the end, and those with frequency one go in the middle. Every repeated letter then appears at positions i and i + |pref| + |mid|, so the distance between the two copies is the same for every such letter.

Complexity

It is O(m^2), where m is the size of the alphabet (each letter occurs at most twice, so the string has at most 2m characters, and s.count scans it for every letter); it can easily be made O(m) with a single counting pass, as sketched after the code.

Code

T = int(input())
for _ in range(T):
    s = input().rstrip()
    pref = ""
    mid = ""
    for x in "abcdefghijklmnopqrstuvwxyz":
        if s.count(x) == 2:
            pref += x
        elif s.count(x) == 1:
            mid += x
    print(pref + mid + pref)
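
As the complexity note says, one counting pass is enough. Here is a minimal sketch of the O(m) variant using collections.Counter; it is an alternative to the loop above, not the code that was actually submitted.

from collections import Counter

T = int(input())
for _ in range(T):
    cnt = Counter(input().rstrip())
    pref = "".join(ch for ch in cnt if cnt[ch] == 2)  # letters occurring twice
    mid = "".join(ch for ch in cnt if cnt[ch] == 1)   # letters occurring once
    print(pref + mid + pref)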

Problem B statement

[greedy, math]

https://codeforces.com/contest/1626/problem/B

Solution

We need to deal with two types of replacement: pairs of adjacent digits whose sum has two digits (like 57 or 89) and pairs whose sum has a single digit (like 34 or 10).

  1. If there is at least one pair with a two-digit sum, we choose the rightmost such pair: the length of the number is preserved and the more significant digits stay untouched.
  2. If all pairs have one-digit sums, we choose the leftmost pair: the length shrinks by one in any case, so we want to increase the most significant digit (both rules are checked by brute force right below).
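
Here is a tiny brute-force check of these two rules; the numbers 3456 and 2322 are made up purely for illustration.

def brute(s):
    # try every adjacent pair and keep the numerically largest result
    return max((s[:i] + str(int(s[i]) + int(s[i+1])) + s[i+2:] for i in range(len(s) - 1)), key=int)

print(brute("3456"))  # 3411: there is a pair with a two-digit sum, take the rightmost one (5+6)
print(brute("2322"))  # 522: all sums have one digit, take the leftmost pair (2+3)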

Complexity

It is O(n) for time and space.

Code

T = int(input())
for _ in range(T):
    s = input().rstrip()
    n, mx, d1, d2 = len(s), 0, [], []
 
    for i in range(n-1):
        sm = int(s[i]) + int(s[i+1])
        mx = max(mx, sm)
        if sm < 10:
            d1 += [i]
        else:
            d2 += [i]
 
    # rightmost pair with a two-digit sum if one exists, otherwise leftmost pair
    t = d2[-1] if mx >= 10 else d1[0]
    print(s[:t] + str(int(s[t]) + int(s[t+1])) + s[t+2:])

Problem C statement

[intervals, sorting]

https://codeforces.com/contest/1626/problem/C

Solution

Actually this problem is about intervals. Imagine that some k = 100 and h = 10. Then we have to start casting the spell no later than moment k - h + 1 = 91 and keep casting through moment 100, so we get the interval [91, 100]. What happens if two intervals intersect? We have to merge them; there is nothing else we can do, because a sequence of casts always starts from 1 again once we stop. So we create all the intervals, merge them, and then compute the cost of each merged interval: casting for L consecutive seconds costs 1 + 2 + ... + L = L*(L+1)/2 mana.
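
Here is a small self-contained illustration with made-up intervals; it uses the same merging logic as the merge function in the code below.

intervals = sorted([[91, 100], [95, 120], [130, 131]])
merged = []
for beg, end in intervals:
    if not merged or merged[-1][1] < beg:
        merged.append([beg, end])
    else:
        merged[-1][1] = max(merged[-1][1], end)
print(merged)  # [[91, 120], [130, 131]]
# lengths 30 and 2, so the total cost is 30*31//2 + 2*3//2 = 465 + 3 = 468
print(sum((y - x + 1) * (y - x + 2) // 2 for x, y in merged))  # 468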

Complexity

Time complexity is O(n log n), where n is the number of monsters.

Code

from collections import Counter
I = lambda: [int(x) for x in input().split()]
 
 
def merge(intervals):
    ans = []
 
    for beg, end in sorted(intervals):
        if not ans or ans[-1][1] < beg:
            ans += [[beg, end]]
        else:
            ans[-1][1] = max(ans[-1][1], end)
 
    return ans
 
T = int(input())
for _ in range(T):
    n = int(input())
    K = I()
    H = I()
    ints = [[k - h + 1, k] for k, h in zip(K, H)]
 
    merged = merge(ints)
    # a merged interval [x, y] has length y - x + 1 and costs 1 + 2 + ... + (y - x + 1) mana
    print(sum((y-x+2)*(y-x+1)//2 for x, y in merged))

Problem D statement

[binary search, greedy]

https://codeforces.com/contest/1626/problem/D

Solution

Let us focus on the sizes of the light and heavy groups. There are not many candidates for those sizes: [1, 2, 4, ..., 2^18]; call them sizes. Now imagine that we try to build the groups with pref = 8 and suff = 16. What do we do? We make the first group as big as possible with size <= 8 and the last group as big as possible with size <= 16. For this we use binary search. We also precompute the array cuts: the positions where the sorted array may be cut, i.e. between two different values.
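
To make the padding cost concrete: the function f in the code below returns how many extra elements are needed so that a group of size x becomes a power of two (an empty group still needs one element). A standalone illustration:

def f(x):
    if x == 0:
        return 1
    return (1 << (x - 1).bit_length()) - x

print([f(x) for x in range(9)])  # [1, 0, 0, 1, 0, 3, 2, 1, 0]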

Complexity

Time complexity is O(n log n + log^3 n): the sort costs O(n log n), and then we try O(log^2 n) pairs of group sizes with an O(log n) binary search for each. Space is O(n).

Code

from bisect import bisect, bisect_left
I = lambda: [int(x) for x in input().split()]
 
def f(x):
    # extra elements needed to pad a group of size x up to the nearest power of two
    # (an empty group still needs one element, since the smallest allowed size is 2^0 = 1)
    if x == 0: return 1
    return (1<<((x-1).bit_length())) - x
 
T = int(input())
for _ in range(T):
    n = int(input())
    arr = sorted(I())
    cuts = [0] + [i for i in range(1, n) if arr[i] != arr[i-1]] + [n]  #can take -1?
    sizes = [1<<i for i in range(20)]
    ans = 2*n
 
    for pref in sizes:
        for suff in sizes:
            s1 = cuts[bisect(cuts, pref) - 1]         # largest allowed first-group size <= pref
            s2 = n - cuts[bisect_left(cuts, n-suff)]  # largest allowed last-group size <= suff
            s3 = n - s1 - s2                          # everything in between is the middle group
            ans = min(ans, f(s1) + f(s2) + f(s3))
 
    print(ans)

Problem E statement

[dfs, dp, greedy]

https://codeforces.com/contest/1626/problem/E

Solution

Notice that every path has the following form:

  1. If we start going down the tree, we always keep going down.
  2. Alternatively, we make several steps up, but from some moment on we only go down.

First, let us deal with the paths that only go down.

The key idea of this problem is the following: imagine we have a node u and its child v. If we can reach a black node from v and the number of black nodes in the subtree of v is >= 2, then we can reach a black node from u as well. Indeed, we have at least two black nodes to aim at (the actual move is along the edge u -> v either way), so we can avoid repeatedly moving towards the same black node. However, in this way we only handle the direction parent -> child.

  1. So, first, we run a dfs that processes children before their parent (a postorder traversal), updating ans and computing cnt[u], the number of black nodes in the subtree of u.
  2. Then we deal with the direction child -> parent. For this we run a second dfs from the root that visits the parent of a node before the node itself (a preorder traversal). Moving up from v to its parent u is allowed if tot - cnt[v] >= 2 (at least two black nodes lie outside the subtree of v) and ans[u] = 1.
  3. Finally (in fact, before both traversals), we mark every black node and all neighbours of black nodes in ans, because from them a black node is reachable in at most one step.

I think steps 1 and 2 can somehow be merged to make this more elegant, but I am not sure how.

Complexity

It is O(n) for time and O(n) for space. We also use the bootstrap trick for the recursion to avoid TLE (and Python's recursion limit).

Code

from collections import defaultdict, Counter
I = lambda: [int(x) for x in input().split()]
import io, os, sys
input = io.BytesIO(os.read(0,os.fstat(0).st_size)).readline

################################################
from types import GeneratorType
def bootstrap(f, stack=[]):
    def wrappedfunc(*args, **kwargs):
        if stack:
            return f(*args, **kwargs)
        else:
            to = f(*args, **kwargs)
            while True:
                if type(to) is GeneratorType:
                    stack.append(to)
                    to = next(to)
                else:
                    stack.pop()
                    if not stack:
                        break
                    to = stack[-1].send(to)
            return to
    return wrappedfunc
################################################


@bootstrap
def dfs1(u, p):
    # bottom-up pass: cnt[u] = number of black nodes in the subtree of u;
    # ans[u] = 1 if some child v already has ans[v] = 1 and cnt[v] >= 2
    cnt[u] = c[u]
    for v in G[u]:
        if v == p: continue
        yield dfs1(v, u)
        cnt[u] += cnt[v]
        if ans[v] and cnt[v] >= 2: ans[u] = 1
    yield


@bootstrap
def dfs2(u, p):
    # top-down pass: ans[v] = 1 if the parent u already has ans[u] = 1
    # and there are at least 2 black nodes outside the subtree of v
    for v in G[u]:
        if v == p: continue
        if ans[u] and tot - cnt[v] >= 2: ans[v] = 1
        yield dfs2(v, u)
    yield

n = int(input())
c = I()
cnt, ans = [0]*n, [0]*n
G = defaultdict(list)
tot = sum(c)
for i in range(n-1):
    u, v = I()
    G[u-1] += [v-1]
    G[v-1] += [u-1]

# black nodes and their neighbours can reach a black node in at most one step
for i in range(n):
    if not c[i]: continue
    ans[i] = 1
    for v in G[i]: ans[v] = 1

dfs1(0, -1)
dfs2(0, -1)

print(" ".join(map(str, ans)) + "\n")

Problem F statement

[dp, math, combinatorics]

https://codeforces.com/contest/1626/problem/F

Solution

Nice and difficult problem. Let us first notice that if we replace a number x with y = x % 720720, we can easily recalculate the answer. Why is that? N = 720720 is divisible by every i in 1, 2, ..., 16, so performing the operations x -= x % i for i in 1, 2, ..., 16 and only then reducing modulo 720720 gives the same result as performing them directly on y. Also, instead of the expectation itself we compute it multiplied by n^k, as the problem asks.

Imagine that we have [x1, x2, ..., xi, xi+1, ..., xn] and we change it to [x1, x2, ..., yi = xi % N, xi+1, ..., xn]. What happens with our expectation? There are exactly n^k equally likely ways to generate the random indices, and each of them contributes k terms to the sum. Across all of them, index i is chosen n^{k-1} * k times, and each such term shrinks by exactly xi - yi, because the difference xi - yi is a multiple of N and is never affected by the operations. So, when we go from xi to yi, we lose n^{k-1} * k * (xi - yi), which we have to add back.
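
A quick self-contained sanity check of the divisibility fact used above; the value 123456789 is an arbitrary example.

N = 720720
print(all(N % i == 0 for i in range(1, 17)))  # True: N is a multiple of every i from 1 to 16
x = 123456789  # arbitrary example value
for i in range(1, 17):
    # subtracting x % i commutes with reducing modulo N, because N % i == 0
    assert (x - x % i) % N == (x % N) - (x % N) % i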

Now, let us look at the numbers p[0], ..., p[N-1]: how many elements of each residue modulo N we have. Divided by n, these numbers are the probabilities with which a uniformly sampled element has each residue. (Imagine that N = 4 and we have 0, 1, 1, 2, 3, 3, 3; then we pick 0, 1, 2, 3 with probabilities 1/7, 2/7, 1/7, 3/7.) After each step these probabilities change: an element equal to j modulo N is chosen with probability 1/n and becomes j - j % i, and with probability (n-1)/n it stays the same. So all we need to do is recalculate the entries of p. Notice that because we go from j = 0 to N-1 and j - j % i <= j, every transfer lands on an index that has already been processed, so there are no collisions. Since we actually keep not probabilities but counts scaled by a power of n, we evaluate the expectation tot at level i and then multiply it by the coefficient n^{k-i}.

Complexity

It is O(N*k) for time and O(N) for space. Notice that it is enough to take N = lcm(1, ..., 16) = 720720, since every modulus i used in the operations is at most 16.

Code

M, N, res = 998244353, 720720, 0
I = lambda: [int(x) for x in input().split()]
p = [0] * N

n, a, x, y, k, U = I()
# generate the array with the given recurrence; keep only the residues modulo N
# and remember how much was cut off (a // N * N) so it can be added back later
for i in range(1, n + 1):
    p[a % N] += 1
    res += a // N * N
    a = (a * x + y) % U

# every cut-off part is counted k * n^(k-1) times (see the explanation above)
res = (res * pow(n, k - 1, M) * k) % M
for i in range(1, k + 1):
    # counts in p are scaled by n^(i-1) here; tot is the scaled value picked at step i
    tot = sum(p[j] * j for j in range(N)) % M
    for j in range(N):
        # an element with residue j is picked with probability 1/n and becomes j - j % i;
        # keeping counts scaled by n: p[j] -> (n-1)*p[j], and the old p[j] moves to p[j - j % i]
        tmp = p[j]
        p[j] *= (n - 1)
        p[j] %= M
        p[j - j % i] += tmp
    res += (tot * pow(n, k - i, M)) % M

print(res % M)

Remark

I should say that this idea is essentially about Markov chains.