- Above all, however, we must take account of the serious mistakes committed in recent years.
- Framför allt måste vi dock ta hänsyn till de allvarliga misstag som har begåtts de senaste åren.
- I think, on the contrary, that we have made a serious mistake.
- Jag tror att vi annars skulle begå ett allvarligt misstag.
- I still believe that our serious mistake was precisely cancelling the sitting of the joint ACP-European Union assembly.
- Jag står fast vid min åsikt att det var ett allvarligt misstag att lämna återbud till sammanträdet för AVS-EU:s gemensamma församling.
- President Barroso is obviously completely on his own in this campaign and I believe that this is a very serious mistake for which I believe that your group is chiefly responsible:
- Ordförande Barroso står uppenbarligen helt ensam i denna kampanj, och detta är enligt min uppfattning ett allvarligt misstag som jag anser er grupp vara huvudsakligen ansvarig för:
- cutting our links with the United States in the fight against terrorism would be an extremely serious mistake, and would cause very severe damage to the population of the European Union as a whole, but there should also be links with the moderate Arab countries, including those in Al-Qaida’s sights.
- Att bryta banden med Förenta staterna i kampen mot terrorismen vore ett oerhört allvarligt misstag som skulle orsaka hela EU:s befolkning mycket svår skada.
SET search_path TO f9miniensv;

WITH
-- Matches: an English amod relation (NOUN head, ADJ dependent) whose head and
-- dependent are both word-aligned to a Swedish AT relation (NOUN head, ADJ
-- dependent), restricted to the specific lemma_ids of interest (here the
-- "serious mistake" / "allvarligt misstag" pair shown above).
list AS (
    SELECT
        t11.token_id AS t11,
        t12.token_id AS t12,
        t21.token_id AS t21,
        t22.token_id AS t22,
        r1.dep_id AS dep1,
        r2.dep_id AS dep2
    FROM deprel r1
    JOIN depstr s1 ON s1.dep_id = r1.dep_id
    JOIN word_align a1 ON a1.wsource = r1.head AND a1.wsource < a1.wtarget
    JOIN word_align a2 ON a2.wsource = r1.dependent
    JOIN deprel r2 ON r2.head = a1.wtarget AND r2.dependent = a2.wtarget
    JOIN depstr s2 ON s2.dep_id = r2.dep_id
    JOIN token t11 ON t11.token_id = r1.head
    JOIN token t21 ON t21.token_id = r2.head
    JOIN token t12 ON t12.token_id = r1.dependent
    JOIN token t22 ON t22.token_id = r2.dependent
    WHERE
        s1.val = 'amod' AND
        s2.val = 'AT' AND
        t11.ctag = 'NOUN' AND
        t21.ctag = 'NOUN' AND
        t12.ctag = 'ADJ' AND
        t22.ctag = 'ADJ' AND
        t11.lemma_id = 9846 AND
        t12.lemma_id = 33635 AND
        t21.lemma_id = 50510 AND
        t22.lemma_id = 40744),
-- Per-sentence statistics for every sentence containing a match: token count,
-- number of alignment links, and number of distinct aligned target tokens.
stats AS (
    SELECT
        sentence_id,
        count(DISTINCT token_id) AS c,
        count(*) AS c_aligned,
        count(DISTINCT wtarget) AS c_target
    FROM token
    LEFT JOIN word_align ON wsource = token_id
    WHERE sentence_id IN (
        SELECT sentence_id
        FROM list
        JOIN token ON token_id IN (t11, t21)
    )
    GROUP BY sentence_id),
-- Number the matches so that both sentences of a pair share the same index i.
numbered AS (
    SELECT row_number() OVER () AS i, *
    FROM list),
-- One row per sentence of each pair (n = 1 source, n = 2 target), with a
-- ranking weight w that favors pairs of similar length (sigmoid term) and
-- short sentences overall (1/log term).
sentences AS (
    SELECT *,
        .2 * (1 / (1 + exp(max(c) OVER (PARTITION BY i) - min(c) OVER (PARTITION BY i)))) +
        .8 * (1 / log(avg(c) OVER (PARTITION BY i))) AS w
    FROM (
        SELECT i, 1 AS n, sentence_id, ARRAY[t11, t12] AS tokens
        FROM numbered
        JOIN token ON token_id = t11
        UNION
        SELECT i, 2 AS n, sentence_id, ARRAY[t21, t22] AS tokens
        FROM numbered
        JOIN token ON token_id = t21
    ) x
    JOIN stats USING (sentence_id)
    ORDER BY i, n)
-- Render each sentence as HTML, wrapping every token in a span and
-- highlighting the matched tokens.
SELECT
    i,
    n,
    w,
    c,
    c_aligned,
    c_target,
    sentence_id,
    string_agg(CASE WHEN lpad THEN ' ' ELSE '' END || '<span class="token' ||
               CASE WHEN ARRAY[token_id] <@ tokens THEN ' hl' ELSE '' END || '">' || val || '</span>',
               '' ORDER BY token_id ASC) AS s
FROM sentences
JOIN token USING (sentence_id)
JOIN typestr USING (type_id)
GROUP BY i, n, w, c, c_aligned, c_target, sentence_id
ORDER BY w DESC, i, n;
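The ranking weight `w` in the `sentences` CTE combines two terms: a sigmoid penalty on the length difference between the two sentences of a pair, and a `1/log` bonus for short sentences. A minimal Python sketch of the same arithmetic (an assumption-laden reading: `c` is taken to be the per-sentence token count, and PostgreSQL's `log()` is base-10, hence `log10` below):

```python
import math

def sentence_weight(counts):
    """Sketch of the weight from the query's `sentences` CTE:
    0.2 * sigmoid-like penalty on the length gap of the pair,
    plus 0.8 * 1/log10(average length), rewarding short pairs."""
    c_max, c_min = max(counts), min(counts)
    avg = sum(counts) / len(counts)
    return (0.2 * (1 / (1 + math.exp(c_max - c_min))) +
            0.8 * (1 / math.log10(avg)))

# e.g. a hypothetical pair of 18- and 20-token sentences
w = sentence_weight([18, 20])
```

A same-length pair of 10-token sentences scores 0.9 (0.2 · 0.5 + 0.8 · 1), while longer or more unbalanced pairs score lower, which is why the final `ORDER BY w DESC` surfaces short, well-aligned pairs first.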