{"cells": [{"cell_type": "markdown", "metadata": {}, "source": ["# Algo - jeux de dictionnaires, plus grand suffixe commun\n", "\n", "Les [dictionnaires](http://www.xavierdupre.fr/app/teachpyx/helpsphinx/c_lang/types.html#dictionnaire) sont tr\u00e8s utilis\u00e9s pour associer des choses entre elles, surtout quand ces choses ne sont pas enti\u00e8res. Le notebook montre l'int\u00e9r\u00eat de perdre un peu de temps pour transformer les donn\u00e9es et rendre un calcul plus rapide."]}, {"cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [{"data": {"text/html": ["
\n", ""], "text/plain": [""]}, "execution_count": 2, "metadata": {}, "output_type": "execute_result"}], "source": ["from jyquickhelper import add_notebook_menu\n", "add_notebook_menu()"]}, {"cell_type": "markdown", "metadata": {}, "source": ["## Enonc\u00e9\n", "\n", "Le texte suivant est un po\u00e8me d'Arthur Rimbaud, Les Voyelles. On veut en extraire tous les mots."]}, {"cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": ["poeme = \"\"\"\n", "A noir, E blanc, I rouge, U vert, O bleu, voyelles,\n", "Je dirai quelque jour vos naissances latentes.\n", "A, noir corset velu des mouches \u00e9clatantes\n", "Qui bombillent autour des puanteurs cruelles,\n", "\n", "Golfe d'ombre; E, candeur des vapeurs et des tentes,\n", "Lance des glaciers fiers, rois blancs, frissons d'ombelles;\n", "I, pourpres, sang crach\u00e9, rire des l\u00e8vres belles\n", "Dans la col\u00e8re ou les ivresses p\u00e9nitentes;\n", "\n", "U, cycles, vibrements divins des mers virides,\n", "Paix des p\u00e2tis sem\u00e9s d'animaux, paix des rides\n", "Que l'alchimie imprime aux grands fronts studieux;\n", "\n", "O, supr\u00eame clairon plein de strideurs \u00e9tranges,\n", "Silences travers\u00e9s des Mondes et des Anges:\n", "\u2014O l'Om\u00e9ga, rayon violet de Ses Yeux!\n", "\"\"\""]}, {"cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["['a', 'noir', 'e', 'blanc', 'i', 'rouge', 'u', 'vert', 'o', 'bleu', 'voyelles', 'je', 'dirai', 'quelque', 'jour', 'vos', 'naissances', 'latentes', 'a', 'noir', 'corset', 'velu', 'des', 'mouches', '\u00e9clatantes', 'qui', 'bombillent', 'autour', 'des', 'puanteurs', 'cruelles', 'golfe', 'd', 'ombre', 'e', 'candeur', 'des', 'vapeurs', 'et', 'des', 'tentes', 'lance', 'des', 'glaciers', 'fiers', 'rois', 'blancs', 'frissons', 'd', 'ombelles', 'i', 'pourpres', 'sang', 'crach\u00e9', 'rire', 'des', 'l\u00e8vres', 'belles', 'dans', 'la', 'col\u00e8re', 'ou', 'les', 'ivresses', 'p\u00e9nitentes', 'u', 'cycles', 'vibrements', 'divins', 'des', 'mers', 'virides', 'paix', 'des', 'p\u00e2tis', 'sem\u00e9s', 'd', 'animaux', 'paix', 'des', 'rides', 'que', 'l', 'alchimie', 'imprime', 'aux', 'grands', 'fronts', 'studieux', 'o', 'supr\u00eame', 'clairon', 'plein', 'de', 'strideurs', '\u00e9tranges', 'silences', 'travers\u00e9s', 'des', 'mondes', 'et', 'des', 'anges', '\u2014o', 'l', 'om\u00e9ga', 'rayon', 'violet', 'de', 'ses', 'yeux']\n"]}], "source": ["def extract_words(text):\n", " # ce n'est pas la plus efficace des fonctions mais \u00e7a fait ce qu'on veut\n", " spl = text.lower().replace(\"!\", \"\").replace(\",\", \"\").replace(\n", " \";\", \"\").replace(\".\", \"\").replace(\":\", \"\").replace(\"'\", \" \").split()\n", " return(spl)\n", " \n", "print(extract_words(poeme))"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 1 : trouver les deux mots qui partagent le plus grand suffixe en commun"]}, {"cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 2 : constuire un dictionnaire qui associe \u00e0 chaque lettre tous les mots se terminant par celle-ci"]}, {"cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 3 : trouver les deux mots qui partagent le plus grand suffixe en commun en utilisant le dictionnaire pr\u00e9c\u00e9dent"]}, {"cell_type": "code", "execution_count": 
6, "metadata": {}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 4 : mesurer le temps pris par chaque fonction\n", "\n", "La fonction [perf_counter](https://docs.python.org/3/library/time.html#time.perf_counter) est parfaite pour \u00e7a."]}, {"cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 5 : expliquer pourquoi telle m\u00e9thode est plus rapide.\n", "\n", "La r\u00e9ponse devrait guider vers une m\u00e9thode encore plus rapide."]}, {"cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 6 : pousser l'id\u00e9e plus loin et construire un trie\n", "\n", "Indexer les mots par leur derni\u00e8re lettre permet d'aller plus vite. Il faut maintenant trouver le suffixe le plus long dans chaque sous-groupe de mots. Ce probl\u00e8me est identique au pr\u00e9c\u00e9dent sur tous les mots pr\u00e9c\u00e9dents auxquels la derni\u00e8re aurait \u00e9t\u00e9 \u00f4t\u00e9e. Comment exploiter cette id\u00e9e jusqu'au bout ?"]}, {"cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": []}, {"cell_type": "markdown", "metadata": {}, "source": ["## R\u00e9ponses"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 1 : trouver les deux mots qui partagent le plus grand suffixe en commun\n", "\n", "Ce n'est qu'une suggestion. La fonction repose sur trois boucles, la premi\u00e8re parcourt diff\u00e9rentes tailles de suffixe, les deux autres regardes toutes les paires de mots."]}, {"cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [{"data": {"text/plain": ["('tentes', ('latentes', 'tentes'))"]}, "execution_count": 11, "metadata": {}, "output_type": "execute_result"}], "source": ["def plus_grand_suffix_commun(mots):\n", " longueur_max = max([len(m) for m in mots])\n", " meilleure_paire = None\n", " meilleur_suffix = None\n", " # On peut parcourir les tailles de suffixe dans un sens croissant\n", " # mais c'est plus efficace dans un sens d\u00e9croissant dans la mesure\n", " # o\u00f9 le premier suffixe trouv\u00e9 est alors n\u00e9cessairement le plus long.\n", " for i in range(longueur_max - 1, 0, -1):\n", " for m1 in mots:\n", " for m2 in mots: # ici, on pourrait ne parcourir qu'une partie des mots\n", " # car m1,m2 ou m2,m1, c'est pareil.\n", " if m1 == m2:\n", " continue\n", " if len(m1) < i or len(m2) < i:\n", " continue\n", " suffixe = m1[-i:]\n", " if m2[-i:] == suffixe:\n", " meilleur_suffix = suffixe\n", " meilleure_paire = m1, m2\n", " return meilleur_suffix, meilleure_paire\n", " \n", "mots = extract_words(poeme)\n", "plus_grand_suffix_commun(mots)"]}, {"cell_type": "markdown", "metadata": {}, "source": ["### Exercice 2 : constuire un dictionnaire qui associe \u00e0 chaque lettre tous les mots se terminant par celle-ci"]}, {"cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'a': ['a', 'a', 'la', 'om\u00e9ga'],\n", " 'r': ['noir', 'jour', 'noir', 'autour', 'candeur'],\n", " 'e': ['e',\n", " 'rouge',\n", " 'je',\n", " 'quelque',\n", " 'golfe',\n", " 'ombre',\n", " 'e',\n", " 'lance',\n", " 'rire',\n", " 'col\u00e8re',\n", " 'que',\n", " 'alchimie',\n", " 'imprime',\n", " 'supr\u00eame',\n", " 'de',\n", " 'de'],\n", " 'c': ['blanc'],\n", " 'i': ['i', 'dirai', 'qui', 'i'],\n", " 'u': ['u', 'bleu', 'velu', 'ou', 'u'],\n", " 't': ['vert', 
{"cell_type": "markdown", "metadata": {}, "source": ["### Exercise 2: build a dictionary that maps each letter to all the words ending with that letter"]},
{"cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'a': ['a', 'a', 'la', 'om\u00e9ga'],\n", " 'r': ['noir', 'jour', 'noir', 'autour', 'candeur'],\n", " 'e': ['e',\n", " 'rouge',\n", " 'je',\n", " 'quelque',\n", " 'golfe',\n", " 'ombre',\n", " 'e',\n", " 'lance',\n", " 'rire',\n", " 'col\u00e8re',\n", " 'que',\n", " 'alchimie',\n", " 'imprime',\n", " 'supr\u00eame',\n", " 'de',\n", " 'de'],\n", " 'c': ['blanc'],\n", " 'i': ['i', 'dirai', 'qui', 'i'],\n", " 'u': ['u', 'bleu', 'velu', 'ou', 'u'],\n", " 't': ['vert', 'corset', 'bombillent', 'et', 'et', 'violet'],\n", " 'o': ['o', 'o', '\u2014o'],\n", " 's': ['voyelles',\n", " 'vos',\n", " 'naissances',\n", " 'latentes',\n", " 'des',\n", " 'mouches',\n", " '\u00e9clatantes',\n", " 'des',\n", " 'puanteurs',\n", " 'cruelles',\n", " 'des',\n", " 'vapeurs',\n", " 'des',\n", " 'tentes',\n", " 'des',\n", " 'glaciers',\n", " 'fiers',\n", " 'rois',\n", " 'blancs',\n", " 'frissons',\n", " 'ombelles',\n", " 'pourpres',\n", " 'des',\n", " 'l\u00e8vres',\n", " 'belles',\n", " 'dans',\n", " 'les',\n", " 'ivresses',\n", " 'p\u00e9nitentes',\n", " 'cycles',\n", " 'vibrements',\n", " 'divins',\n", " 'des',\n", " 'mers',\n", " 'virides',\n", " 'des',\n", " 'p\u00e2tis',\n", " 'sem\u00e9s',\n", " 'des',\n", " 'rides',\n", " 'grands',\n", " 'fronts',\n", " 'strideurs',\n", " '\u00e9tranges',\n", " 'silences',\n", " 'travers\u00e9s',\n", " 'des',\n", " 'mondes',\n", " 'des',\n", " 'anges',\n", " 'ses'],\n", " 'd': ['d', 'd', 'd'],\n", " 'g': ['sang'],\n", " '\u00e9': ['crach\u00e9'],\n", " 'x': ['paix', 'animaux', 'paix', 'aux', 'studieux', 'yeux'],\n", " 'l': ['l', 'l'],\n", " 'n': ['clairon', 'plein', 'rayon']}"]}, "execution_count": 12, "metadata": {}, "output_type": "execute_result"}], "source": ["mots = extract_words(poeme)\n", "suffix_map = {}\n", "for mot in mots:\n", "    lettre = mot[-1]\n", "    if lettre in suffix_map:\n", "        suffix_map[lettre].append(mot)\n", "    else:\n", "        suffix_map[lettre] = [mot]\n", "suffix_map"]},
{"cell_type": "markdown", "metadata": {}, "source": ["### Exercise 3: find the two words that share the longest common suffix, using the previous dictionary\n", "\n", "We combine the two previous ingredients."]},
{"cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [{"data": {"text/plain": ["(6, 'tentes', ('latentes', 'tentes'))"]}, "execution_count": 13, "metadata": {}, "output_type": "execute_result"}], "source": ["def plus_grand_suffix_commun_dictionnaire(mots):\n", "    suffix_map = {}\n", "    for mot in mots:\n", "        lettre = mot[-1]\n", "        if lettre in suffix_map:\n", "            suffix_map[lettre].append(mot)\n", "        else:\n", "            suffix_map[lettre] = [mot]\n", "\n", "    tout = []\n", "    for cle, valeur in suffix_map.items():\n", "        # plus_grand_suffix_commun returns None when a group shares no suffix\n", "        suffix = plus_grand_suffix_commun(valeur)\n", "        if suffix is None:\n", "            continue\n", "        tout.append((len(suffix[0]), suffix[0], suffix[1]))\n", "    return max(tout)\n", "\n", "\n", "mots = extract_words(poeme)\n", "plus_grand_suffix_commun_dictionnaire(mots)"]},
{"cell_type": "markdown", "metadata": {}, "source": ["### Exercise 4: measure the time taken by each function"]},
{"cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [{"data": {"text/plain": ["0.4579025000000172"]}, "execution_count": 14, "metadata": {}, "output_type": "execute_result"}], "source": ["from time import perf_counter\n", "\n", "mots = extract_words(poeme)\n", "\n", "debut = perf_counter()\n", "for i in range(100):\n", "    plus_grand_suffix_commun(mots)\n", "perf_counter() - debut"]},
{"cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [{"data": {"text/plain": ["0.15045649999999"]}, "execution_count": 15, "metadata": {}, "output_type": "execute_result"}], "source": ["debut = perf_counter()\n", "for i in range(100):\n", "    plus_grand_suffix_commun_dictionnaire(mots)\n", "perf_counter() - debut"]},
{"cell_type": "markdown", "metadata": {}, "source": ["### Exercise 5: explain why one method is faster\n", "\n", "The second method is two to three times faster. The gain depends on the number of words, denoted *N*. If *L* is the length of the longest word, the first method costs $O(LN^2)$. The second one is a sequence of two steps. The first step builds a dictionary with a single pass over the list of words; its cost is $O(N)$. The second step applies the first method, but on smaller sets: if $N_x$ is the number of words ending with the letter $x$, its cost is $O(L \\sum_x N_x^2)$ with $\\sum_x N_x = N$. We therefore compare $O(LN^2)$ with $O(N) + O(L \\sum_x N_x^2)$. The second cost is smaller because $\\sum_x N_x^2 \\leq (\\sum_x N_x)^2 = N^2$, and it is much smaller when the words are spread over many different final letters."]},
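{"cell_type": "markdown", "metadata": {}, "source": ["To get a feeling for these orders of magnitude on this particular poem, the group sizes $N_x$ can be counted and $\\sum_x N_x^2$ compared with $N^2$. The next cell is only a quick sanity check and reuses extract_words and poeme defined above."]},
{"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["from collections import Counter\n", "\n", "mots = extract_words(poeme)\n", "# N_x = number of words ending with the letter x\n", "tailles = Counter(mot[-1] for mot in mots)\n", "N = len(mots)\n", "N ** 2, sum(n ** 2 for n in tailles.values())"]},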
{"cell_type": "markdown", "metadata": {}, "source": ["### Exercise 6: push the idea further and build a trie\n", "\n", "A [trie](https://fr.wikipedia.org/wiki/Trie_(informatique)) is a data structure designed to quickly find all the words sharing the same prefix or suffix."]},
{"cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'c': {'b': {'a': {'z': {'FIN': 0}, 'FIN': 0}}}}"]}, "execution_count": 16, "metadata": {}, "output_type": "execute_result"}], "source": ["def build_trie(liste):\n", "    trie = {}\n", "    for mot in liste:\n", "        noeud = trie\n", "        # insert the word letter by letter, starting from its last letter\n", "        for i in range(0, len(mot)):\n", "            lettre = mot[len(mot) - i - 1]\n", "            if lettre not in noeud:\n", "                noeud[lettre] = {}\n", "            noeud = noeud[lettre]\n", "        # mark the end of a word\n", "        noeud['FIN'] = 0\n", "    return trie\n", "\n", "\n", "liste = ['zabc', 'abc']\n", "t = build_trie(liste)\n", "t"]},
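{"cell_type": "markdown", "metadata": {}, "source": ["To see how the structure is used, here is a small lookup sketch; the name trie_contains is chosen for this notebook and does not come from a library. It walks the nested dictionaries from the last letter of the word and checks for the FIN marker."]},
{"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": ["def trie_contains(trie, mot):\n", "    # walk the trie from the last letter to the first one\n", "    noeud = trie\n", "    for lettre in reversed(mot):\n", "        if lettre not in noeud:\n", "            return False\n", "        noeud = noeud[lettre]\n", "    # the word is present only if the path ends on the FIN marker\n", "    return 'FIN' in noeud\n", "\n", "\n", "trie_contains(t, 'abc'), trie_contains(t, 'bc')"]},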
{"cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'a': {'FIN': 0, 'l': {'FIN': 0}, 'g': {'\u00e9': {'m': {'o': {'FIN': 0}}}}},\n", " 'r': {'i': {'o': {'n': {'FIN': 0}}},\n", " 'u': {'o': {'j': {'FIN': 0}, 't': {'u': {'a': {'FIN': 0}}}},\n", " 'e': {'d': {'n': {'a': {'c': {'FIN': 0}}}}}}},\n", " 'e': {'FIN': 0,\n", " 'g': {'u': {'o': {'r': {'FIN': 0}}}},\n", " 'j': {'FIN': 0},\n", " 'u': {'q': {'l': {'e': {'u': {'q': {'FIN': 0}}}}, 'FIN': 0}},\n", " 'f': {'l': {'o': {'g': {'FIN': 0}}}},\n", " 'r': {'b': {'m': {'o': {'FIN': 0}}},\n", " 'i': {'r': {'FIN': 0}},\n", " '\u00e8': {'l': {'o': {'c': {'FIN': 0}}}}},\n", " 'c': {'n': {'a': {'l': {'FIN': 0}}}},\n", " 'i': {'m': {'i': {'h': {'c': {'l': {'a': {'FIN': 0}}}}}}},\n", " 'm': {'i': {'r': {'p': {'m': {'i': {'FIN': 0}}}}},\n", " '\u00ea': {'r': {'p': {'u': {'s': {'FIN': 0}}}}}},\n", " 'd': {'FIN': 0}},\n", " 'c': {'n': {'a': {'l': {'b': {'FIN': 0}}}}},\n", " 'i': {'FIN': 0, 'a': {'r': {'i': {'d': {'FIN': 0}}}}, 'u': {'q': {'FIN': 0}}},\n", " 'u': {'FIN': 0,\n", " 'e': {'l': {'b': {'FIN': 0}}},\n", " 'l': {'e': {'v': {'FIN': 0}}},\n", " 'o': {'FIN': 0}},\n", " 't': {'r': {'e': {'v': {'FIN': 0}}},\n", " 'e': {'s': {'r': {'o': {'c': {'FIN': 0}}}},\n", " 'FIN': 0,\n", " 'l': {'o': {'i': {'v': {'FIN': 0}}}}},\n", " 'n': {'e': {'l': {'l': {'i': {'b': {'m': {'o': {'b': {'FIN': 0}}}}}}}}}},\n", " 'o': {'FIN': 0, '\u2014': {'FIN': 0}},\n", " 's': {'e': {'l': {'l': {'e': {'y': {'o': {'v': {'FIN': 0}}},\n", " 'u': {'r': {'c': {'FIN': 0}}},\n", " 'b': {'m': {'o': {'FIN': 0}}, 'FIN': 0}}},\n", " 'FIN': 0,\n", " 'c': {'y': {'c': {'FIN': 0}}}},\n", " 'c': {'n': {'a': {'s': {'s': {'i': {'a': {'n': {'FIN': 0}}}}}},\n", " 'e': {'l': {'i': {'s': {'FIN': 0}}}}}},\n", " 't': {'n': {'e': {'t': {'a': {'l': {'FIN': 0}},\n", " 'FIN': 0,\n", " 'i': {'n': {'\u00e9': {'p': {'FIN': 0}}}}}},\n", " 'a': {'t': {'a': {'l': {'c': {'\u00e9': {'FIN': 0}}}}}}}},\n", " 'd': {'FIN': 0,\n", " 'i': {'r': {'i': {'v': {'FIN': 0}}, 'FIN': 0}},\n", " 'n': {'o': {'m': {'FIN': 0}}}},\n", " 'h': {'c': {'u': {'o': {'m': {'FIN': 0}}}}},\n", " 'r': {'p': {'r': {'u': {'o': {'p': {'FIN': 0}}}}},\n", " 'v': {'\u00e8': {'l': {'FIN': 0}}}},\n", " 's': {'s': {'e': {'r': {'v': {'i': {'FIN': 0}}}}}, 'FIN': 0},\n", " 'g': {'n': {'a': {'r': {'t': {'\u00e9': {'FIN': 0}}}, 'FIN': 0}}}},\n", " 'o': {'v': {'FIN': 0}},\n", " 'r': {'u': {'e': {'t': {'n': {'a': {'u': {'p': {'FIN': 0}}}}},\n", " 'p': {'a': {'v': {'FIN': 0}}},\n", " 'd': {'i': {'r': {'t': {'s': {'FIN': 0}}}}}}},\n", " 'e': {'i': {'c': {'a': {'l': {'g': {'FIN': 0}}}}, 'f': {'FIN': 0}},\n", " 'm': {'FIN': 0}}},\n", " 'i': {'o': {'r': {'FIN': 0}}, 't': {'\u00e2': {'p': {'FIN': 0}}}},\n", " 'c': {'n': {'a': {'l': {'b': {'FIN': 0}}}}},\n", " 'n': {'o': {'s': {'s': {'i': {'r': {'f': {'FIN': 0}}}}}},\n", " 'a': {'d': {'FIN': 0}},\n", " 'i': {'v': {'i': {'d': {'FIN': 0}}}}},\n", " 't': {'n': {'e': {'m': {'e': {'r': {'b': {'i': {'v': {'FIN': 0}}}}}}},\n", " 'o': {'r': {'f': {'FIN': 0}}}}},\n", " '\u00e9': {'m': {'e': {'s': {'FIN': 0}}},\n", " 's': {'r': {'e': {'v': {'a': {'r': {'t': {'FIN': 0}}}}}}}},\n", " 'd': {'n': {'a': {'r': {'g': {'FIN': 0}}}}}},\n", " 'd': {'FIN': 0},\n", " 'g': {'n': {'a': {'s': {'FIN': 0}}}},\n", " '\u00e9': {'h': {'c': {'a': {'r': {'c': {'FIN': 0}}}}}},\n", " 'x': {'i': {'a': {'p': {'FIN': 0}}},\n", " 'u': {'a': {'m': {'i': {'n': {'a': {'FIN': 0}}}}, 'FIN': 0},\n", " 'e': {'i': {'d': {'u': {'t': {'s': {'FIN': 0}}}}}, 'y': {'FIN': 0}}}},\n", " 'l': {'FIN': 0},\n", " 'n': {'o': {'r': {'i': {'a': {'l': {'c': {'FIN': 0}}}}},\n", " 'y': {'a': {'r': {'FIN': 0}}}},\n", " 'i': {'e': {'l': {'p': {'FIN': 0}}}}}}"]}, "execution_count": 17, "metadata": {}, "output_type": "execute_result"}], "source": ["mots = extract_words(poeme)\n", "trie = build_trie(mots)\n", "trie"]},
{"cell_type": "markdown", "metadata": {}, "source": ["This is unreadable. We only show the words ending with ``tes``."]},
{"cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [{"data": {"text/plain": ["{'n': {'e': {'t': {'a': {'l': {'FIN': 0}},\n", " 'FIN': 0,\n", " 'i': {'n': {'\u00e9': {'p': {'FIN': 0}}}}}},\n", " 'a': {'t': {'a': {'l': {'c': {'\u00e9': {'FIN': 0}}}}}}}}"]}, "execution_count": 18, "metadata": {}, "output_type": "execute_result"}], "source": ["trie['s']['e']['t']"]},
{"cell_type": "markdown", "metadata": {}, "source": ["Still not very practical. We would like to display the tree visually, or at least a part of it. We use the [DOT](https://fr.wikipedia.org/wiki/Dot) language."]},
On utilise le langage [DOT](https://fr.wikipedia.org/wiki/Dot)."]}, {"cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [{"name": "stdout", "output_type": "stream", "text": ["digraph{\n", "set2188668603648 [label=\"set\"];\n", "n2188668603008 [label=\"n\"];\n", "set2188668603648 -> n2188668603008;\n", "e2188668601536 [label=\"e\"];\n", "n2188668603008 -> e2188668601536;\n", "t2188668603136 [label=\"t\"];\n", "e2188668601536 -> t2188668603136;\n", "a2188668602112 [label=\"a\"];\n", "t2188668603136 -> a2188668602112;\n", "l2188668602944 [label=\"l\"];\n", "a2188668602112 -> l2188668602944;\n", "FIN2188593162512 [label=\"FIN\"];\n", "l2188668602944 -> FIN2188593162512;\n", "\n", "FIN2188593162512 [label=\"FIN\"];\n", "t2188668603136 -> FIN2188593162512;\n", "\n", "i2188668883392 [label=\"i\"];\n", "t2188668603136 -> i2188668883392;\n", "n2188668883456 [label=\"n\"];\n", "i2188668883392 -> n2188668883456;\n", "\u00e92188668883520 [label=\"\u00e9\"];\n", "n2188668883456 -> \u00e92188668883520;\n", "p2188668883584 [label=\"p\"];\n", "\u00e92188668883520 -> p2188668883584;\n", "FIN2188593162512 [label=\"FIN\"];\n", "p2188668883584 -> FIN2188593162512;\n", "\n", "a2188668617536 [label=\"a\"];\n", "n2188668603008 -> a2188668617536;\n", "t2188668618304 [label=\"t\"];\n", "a2188668617536 -> t2188668618304;\n", "a2188668617856 [label=\"a\"];\n", "t2188668618304 -> a2188668617856;\n", "l2188668363712 [label=\"l\"];\n", "a2188668617856 -> l2188668363712;\n", "c2188668659456 [label=\"c\"];\n", "l2188668363712 -> c2188668659456;\n", "\u00e92188667556608 [label=\"\u00e9\"];\n", "c2188668659456 -> \u00e92188667556608;\n", "FIN2188593162512 [label=\"FIN\"];\n", "\u00e92188667556608 -> FIN2188593162512;\n", "\n", "}\n"]}], "source": ["def build_dot(trie, predecessor=None, root_name=None, depth=0):\n", " rows = []\n", " root = trie\n", " if predecessor is None:\n", " rows.append('digraph{')\n", " rows.append('%s%d [label=\"%s\"];' % (\n", " root_name or 'ROOT', id(trie), root_name or 'ROOT'))\n", " rows.append(build_dot(trie, root_name or 'ROOT', depth=depth))\n", " rows.append(\"}\")\n", " elif isinstance(trie, dict):\n", " for k, v in trie.items():\n", " rows.append('%s%d [label=\"%s\"];' % (k, id(v), k))\n", " rows.append(\"%s%d -> %s%d;\" % (predecessor, id(trie), k, id(v)))\n", " rows.append(build_dot(v, k, depth=depth+1)) \n", " return \"\\n\".join(rows)\n", "\n", "text = build_dot(trie['s']['e']['t'], root_name='set')\n", "print(text)"]}, {"cell_type": "code", "execution_count": 19, "metadata": {"scrolled": false}, "outputs": [{"data": {"text/html": ["
\n", ""], "text/plain": [""]}, "execution_count": 20, "metadata": {}, "output_type": "execute_result"}], "source": ["from jyquickhelper import RenderJsDot\n", "RenderJsDot(text, width=\"100%\")"]}, {"cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [{"data": {"text/plain": ["['s', 'e', 'l', 'l', 'e', 'b', 'FIN']"]}, "execution_count": 21, "metadata": {}, "output_type": "execute_result"}], "source": ["def plus_grand_suffix_commun_dictionnaire_trie(mots):\n", " whole_trie = build_trie(mots)\n", " \n", " def walk(trie):\n", " best = []\n", " for k, v in trie.items():\n", " if isinstance(v, int):\n", " continue\n", " r = walk(v)\n", " if len(r) > 0 and len(r) + 1 > len(best):\n", " best = [k] + r\n", " if len(best) > 0:\n", " return best\n", " if len(trie) >= 2:\n", " return ['FIN']\n", " return []\n", " \n", " return walk(whole_trie)\n", "\n", "\n", "res = plus_grand_suffix_commun_dictionnaire_trie(mots)\n", "res"]}, {"cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [{"data": {"text/plain": ["(6, 'tentes', ('latentes', 'tentes'))"]}, "execution_count": 22, "metadata": {}, "output_type": "execute_result"}], "source": ["res = plus_grand_suffix_commun_dictionnaire(mots)\n", "res"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Le r\u00e9sultat est diff\u00e9rent car le dictionnaire ne garantit pas que les \u00e9l\u00e9ments seront parcourus dans l'ordre alphab\u00e9tique."]}, {"cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [{"data": {"text/plain": ["0.18440039999998703"]}, "execution_count": 23, "metadata": {}, "output_type": "execute_result"}], "source": ["debut = perf_counter()\n", "for i in range(100):\n", " plus_grand_suffix_commun_dictionnaire(mots)\n", "perf_counter() - debut"]}, {"cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [{"data": {"text/plain": ["0.057187499999997726"]}, "execution_count": 24, "metadata": {}, "output_type": "execute_result"}], "source": ["debut = perf_counter()\n", "for i in range(100):\n", " plus_grand_suffix_commun_dictionnaire_trie(mots)\n", "perf_counter() - debut"]}, {"cell_type": "markdown", "metadata": {}, "source": ["Mais c'est beaucoup plus rapide."]}, {"cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": []}], "metadata": {"kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.5"}}, "nbformat": 4, "nbformat_minor": 2}