Index search returns duplicate entries

When I double-click on an index entry in the compiled .chm to
open it, the Topics Found window opens with 2 duplicate entries --
even though the topics are only entered in the index once. Any
ideas?

Hi Bill
First off, I'm pleased Cindy's issue was resolved. However,
what you explained as the reason doesn't seem to match my
understanding of RoboHelp. So naturally I have questions.
You said:
These files will start with the letters "rlt" and are created by
the system whenever you edit a file that is flagged as read
only.
Now I've always understood that these little beasties only
get created when you
preview a topic. I do know that it's not uncommon to find
them lurking about among your true project files. I've always
assumed that something happened that caused RoboHelp not to be able
to clean up after itself, leaving these orphan files behind. I've
also always advised students that these files are totally safe to
blow away, as they are simply temporary files created and used
during the process of previewing.
So I performed a little test. I used a small test project and
flagged one of my topic files as read only. I then tried to edit
the file. It opened just dandy in the WYSIWYG editor. But when I
tried to edit, I received a message advising the file was read only
and asking if I really intended to edit. After saying yes, the read
only flag was removed and the edit happened just fine. During this
whole process, I monitored the folder location and never saw the
rlt file appear.
I then tested my preview theory. I previewed the topic and
flipped over to Windows Explorer and sure enough, there was my
little additional rlt file.
Now here's the kicker. I was trying to duplicate what Cindy
reported. But regardless of how I try to induce it, I'm not able
to. Because RoboHelp normally removes the rlt file after you close
the preview, I sneaked behind RoboHelp's back and flagged the rlt
as read only. This way RoboHelp couldn't delete it and clean up. I
then closed the preview. Compiled and tested and no duplicate Index
entries. I also examined the compiled .CHM for the existence of any
file beginning with rlt. Again, nothing.
Needless to say, I'm more than a bit confused at this point.
Cheers... Rick

Similar Messages

  • 11g Search Index is returning duplicate entries

    We are building up our 11g environments in preparation for migrating over from 10g. In one of our env's (UAT), we have an issue where, if we do a search on a Metadata field or Full Text value, the search engine is returning duplicate results - i.e. each document is listed twice.
    I have done several full rebuilds, including one where I deleted the IdcColl1 folder before kicking it off. No luck at fixing the issue.
    The environment is configured for Full Text searching using Sql Server 2005.
    I am not worried so much about finding out why this is like it is, I just want to fix it because I am running short of time. Any ideas?
    Thanks,
    Alec

    Srinath
    Thanks for your suggestion. I tried the process you described and did a full Reindex (which took several hours). However, when I did the search again this morning, I am still getting the duplicates.
    I am now thinking of exporting all the content using Archiver, reindexing the empty environment and then checking that the index is empty. This way I have a clean sheet to start with. I would then reimport the content and do another reindex. Hopefully then, the index should be clean. My suspicion is that after I have done the export and reindex, there will still be stuff in the index. Have to wait and see.....
    Cheers

  • Masterdata Duplicate Entries Check

    Hi All,
    I would like to search for duplicate entries for field WBS_ELM_EX in the table /BI0/MWBS_ELEMT.
    This table has 612,976 records.
    Is there any program / function module available which lists out duplicate entries?
    Can anyone help me with this?

    Hi Mani,
    Go to RSRV -> All Combined Tests -> Master Data -> Check Master Data for a Characteristic.
    Regards,
    Ramkumar.
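    If you just want a quick list of the offending values before (or after) running the RSRV check, a minimal report sketch along these lines could work. The table and field names are taken from the question; the declarations are assumptions and may need adjusting to your system:
    " Sketch only: list each WBS_ELM_EX value that occurs more than once in /BI0/MWBS_ELEMT.
    DATA: BEGIN OF ls_dup,
            wbs_elm_ex TYPE /bi0/mwbs_elemt-wbs_elm_ex,
            cnt        TYPE i,
          END OF ls_dup.
    DATA: lt_dup LIKE STANDARD TABLE OF ls_dup.

    SELECT wbs_elm_ex COUNT( * ) AS cnt
      FROM /bi0/mwbs_elemt
      INTO CORRESPONDING FIELDS OF TABLE lt_dup
      GROUP BY wbs_elm_ex
      HAVING COUNT( * ) > 1.

    LOOP AT lt_dup INTO ls_dup.
      WRITE: / ls_dup-wbs_elm_ex, ls_dup-cnt.
    ENDLOOP.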

  • Binary Search Tree Question - Duplicate Entries

    I am doing a project for school where I have to parse a web page into a binary search tree. The problem I'm encountering is that the tree is not supposed to have duplicate entries. Instead, each node has a count variable which is supposed to be incremented if a duplicate node attempts to be inserted. I have everything else working but I can't seem to get the count to increment.
    Here is the insert method in my BinarySearchTree class:
        public void insert( String s ) {
            BTNode newNode = new BTNode();
            newNode.sData = s;
            newNode.count = 1;
            if( root == null ) {
                root = newNode;
            } else {
                BTNode current = root;
                BTNode parent;
                while( true ) {
                    parent = current;
                    if( s.compareTo( current.sData ) < 0 ) {
                        current = current.leftChild;
                        if( current == null ) {
                            parent.leftChild = newNode;
                            return;
                        }
                    } else if( s.compareTo( current.sData ) >= 0 ) {
                        current = current.rightChild;
                        if( current == null ) {
                            parent.rightChild = newNode;
                            return;
                        }
                    }
                }
            }
        }
    Thanks for the help!

    runningfish007 wrote:
    I am doing a project for school where I have to parse a web page into a binary search tree. The problem I'm encountering is that the tree is not supposed to have duplicate entries. Instead, each node has a count variable which is supposed to be incremented if a duplicate node attempts to be inserted. There are two ways I can think to do this simply. Presumably you have a search method that returns a BTNode for a given String? You could use that to find out if the tree already contains that String. If so, you have that Node, so you can just increment it's counter. If not, then go through and insert it normally, knowing that you won't encounter a duplicate.
    The second is to look at the children for each node as you come to it. If either matches the given string, then you've found a duplicate. Otherwise, either insert the new element there (if that's the location it belongs), or continue down the tree.
    Either way, I'd rewrite this as a recursive method. It's much simpler. What you do is check if the new element should be a child of the current node. If so, attach it and return. Otherwise, call the method on the correct child node. No looping necessary.
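    For illustration, here is a minimal sketch of the recursive variant described above. It assumes the same BTNode fields (sData, count, leftChild, rightChild) and the same root reference as in the original post, so treat it as a starting point rather than a drop-in replacement:
        // Sketch of the recursive insert described above; assumes the BTNode
        // fields (sData, count, leftChild, rightChild) from the original post.
        public void insert( String s ) {
            root = insert( root, s );
        }

        private BTNode insert( BTNode node, String s ) {
            if( node == null ) {
                // empty spot found: create a new node with a count of 1
                BTNode newNode = new BTNode();
                newNode.sData = s;
                newNode.count = 1;
                return newNode;
            }
            int cmp = s.compareTo( node.sData );
            if( cmp == 0 ) {
                // duplicate string: bump the counter instead of adding a second node
                node.count++;
            } else if( cmp < 0 ) {
                node.leftChild = insert( node.leftChild, s );
            } else {
                node.rightChild = insert( node.rightChild, s );
            }
            return node;
        }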

  • Index search with preference

    My Java object has three fields A, B and C, and I'm indexing on these three fields' values.
    There can be duplicate entries, e.g. Obj1's A equal to Obj2's B.
    While searching the index, if more than one object is found, preference should be in the order A -> B -> C; only one object should be returned and the others should be filtered out.
    As performance is a critical criterion in the application, I don't want to use a filter (as there can be hundreds of big Java objects expected to be returned; iterating through these can hamper performance).
    Can it be achieved through ChainedExtractor or any other better way?
    Thanks in advance.
    Prashant

    Hi Prashant,
    please check the following example:
    public class ClassedQueryUtil {
        public static class ClassedQueryParameters implements PortableObject, Serializable {
            private List<ValueExtractor> extractors;
            private Object queryFor;

            public ClassedQueryParameters(List<ValueExtractor> extractors, Object queryFor) {
                super();
                this.extractors = extractors;
                this.queryFor = queryFor;
            }

            public ClassedQueryParameters() {
                super();
            }

            public void readExternal(PofReader reader) throws IOException {
                extractors = (List<ValueExtractor>) reader.readCollection(0, null);
                queryFor = reader.readObject(1);
            }

            public void writeExternal(PofWriter writer) throws IOException {
                writer.writeCollection(0, extractors);
                writer.writeObject(1, queryFor);
            }

            public List<ValueExtractor> getExtractors() {
                return extractors;
            }

            public Object getQueryFor() {
                return queryFor;
            }
        }

        public static class ClassedQueryFilter extends ClassedQueryParameters implements IndexAwareFilter {
            public ClassedQueryFilter(List<ValueExtractor> extractors, Object queryFor) {
                super(extractors, queryFor);
            }

            public ClassedQueryFilter() {
                super();
            }

            public Filter applyIndex(Map mapIndexes, Set candidateBinaryKeys) {
                List<ValueExtractor> extractors = getExtractors();
                Object queryFor = getQueryFor();
                boolean foundNoMatch = true;
                if (mapIndexes.keySet().containsAll(extractors)) {
                    for (ValueExtractor extractor : extractors) {
                        MapIndex index = (MapIndex) mapIndexes.get(extractor);
                        Collection reverse = (Collection) index.get(queryFor);
                        if (reverse != null && !reverse.isEmpty()) {
                            if (!Collections.disjoint(reverse, candidateBinaryKeys)) {
                                candidateBinaryKeys.retainAll(reverse);
                                foundNoMatch = false;
                                break;
                            }
                        }
                    }
                }
                if (foundNoMatch) {
                    candidateBinaryKeys.clear();
                }
                return null;
            }

            public int calculateEffectiveness(Map mapIndexes, Set candidateBinaryKeys) {
                List<ValueExtractor> extractors = getExtractors();
                if (mapIndexes.keySet().containsAll(extractors)) {
                    return 0;
                }
                throw new UnsupportedOperationException("Must have all indexes defined!!!");
            }

            public boolean evaluateEntry(Entry entry) {
                throw new UnsupportedOperationException("Must have all indexes defined!!!");
            }

            public boolean evaluate(Object value) {
                throw new UnsupportedOperationException("Must have all indexes defined!!!");
            }
        }

        public static class ClassedQueryAggregator extends ClassedQueryParameters
        implements ParallelAwareAggregator {
            public ClassedQueryAggregator() {
                super();
            }

            public ClassedQueryAggregator(List<ValueExtractor> extractors, Object queryFor) {
                super(extractors, queryFor);
            }

            public Collection aggregateResults(Collection results) {
                int resultClassSoFar = Integer.MAX_VALUE;
                ArrayList resultValues = new ArrayList();
                for (Object object : results) {
                    Object[] result = (Object[]) object;
                    int resultClass = (Integer) result[0];
                    if (resultClass == -1 || resultClass > resultClassSoFar) {
                        continue;
                    } else {
                        if (resultClass < resultClassSoFar) {
                            resultValues.clear();
                            resultClassSoFar = resultClass;
                        }
                        Object[] resInner = (Object[]) result[1];
                        resultValues.ensureCapacity(resultValues.size() + resInner.length);
                        for (Object resultElement : resInner) {
                            resultValues.add(resultElement);
                        }
                    }
                }
                if (!resultValues.isEmpty()) {
                    if (resultValues.get(0) instanceof Binary) {
                        for (int i = resultValues.size() - 1; i >= 0; --i) {
                            resultValues.set(i, ExternalizableHelper.fromBinary((Binary) resultValues.get(i)));
                        }
                    }
                }
                return resultValues;
            }

            public EntryAggregator getParallelAggregator() {
                return this;
            }

            public Object aggregate(Set entries) {
                if (entries.isEmpty()) {
                    return new Object[] { new Integer(-1) };
                }
                QueryMap.Entry first = (QueryMap.Entry) entries.iterator().next();
                Object[] resInner = new Object[entries.size()];
                List<ValueExtractor> extractors = getExtractors();
                Object queryFor = getQueryFor();
                int resultClass = 0;
                int numExtractors = extractors.size();
                for (resultClass = 0; resultClass < numExtractors; ++resultClass) {
                    if (queryFor.equals(first.extract(extractors.get(resultClass)))) {
                        break;
                    }
                }
                boolean isBinaryEntry = first instanceof BinaryEntry;
                int i = 0;
                for (Object entry : entries) {
                    if (isBinaryEntry) {
                        resInner[i] = ((BinaryEntry) entry).getBinaryValue();
                    } else {
                        resInner[i] = ((Map.Entry) entry).getValue();
                    }
                    ++i;
                }
                return new Object[] { resultClass, resInner };
            }
        }

        public static Collection<Map.Entry> queryForEntries(NamedCache cache, List<ValueExtractor> extractors, Object queryFor) {
            int resultClassSoFar = Integer.MAX_VALUE;
            Set results = cache.entrySet(new ClassedQueryFilter(extractors, queryFor));
            ArrayList resultValues = new ArrayList();
            for (Object object : results) {
                Map.Entry entry = (Entry) object;
                int resultInClass = 0;
                int numExtractors = extractors.size();
                for (resultInClass = 0; resultInClass < numExtractors; ++resultInClass) {
                    Object extracted = InvocableMapHelper.extractFromEntry(extractors.get(resultInClass), entry);
                    if (queryFor.equals(extracted)) {
                        break;
                    }
                }
                if (resultInClass > resultClassSoFar) {
                    continue;
                }
                if (resultInClass < resultClassSoFar) {
                    resultValues.clear();
                    resultClassSoFar = resultInClass;
                }
                resultValues.add(entry);
            }
            return resultValues;
        }

        public static Collection aggregateForValues(NamedCache cache, List<ValueExtractor> extractors, Object queryFor) {
            return (Collection) cache.aggregate(new ClassedQueryFilter(extractors, queryFor), new ClassedQueryAggregator(extractors, queryFor));
        }
    }
    To use it, invoke one of the two static methods on the ClassedQueryUtil class.
    You must use a partitioned cache (distributed or near cache) for this to operate.
    This depends on the Coherence 3.5 prerelease which has BinaryEntry. This allows the parallel aggregator on the storage node to avoid deserializing the entry value just to return it. If you don't have that, then all candidate entries will be deserialized and reserialized on the storage node. Note that a candidate entry is an entry which has not been filtered away by the filter, so to the best knowledge of the storage node it is to be returned to the client; in the above example, that means the entries matching on field B when there was no entry matching on field A on that node.
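    For illustration only, a hypothetical call site might look like the sketch below. The accessor names getA/getB/getC and the cache name are invented for the example, and the usual com.tangosol imports are assumed; an index should be added for every extractor so the filter's applyIndex can do its work:
    // Hypothetical usage sketch; getA/getB/getC and "my-partitioned-cache" are placeholders.
    NamedCache cache = CacheFactory.getCache("my-partitioned-cache");
    List<ValueExtractor> extractors = new ArrayList<ValueExtractor>();
    extractors.add(new ReflectionExtractor("getA"));
    extractors.add(new ReflectionExtractor("getB"));
    extractors.add(new ReflectionExtractor("getC"));
    for (ValueExtractor extractor : extractors) {
        // the filter and aggregator above require an index for each extractor
        cache.addIndex(extractor, false, null);
    }
    // returns only the values matching on the most preferred field (A before B before C)
    Collection results = ClassedQueryUtil.aggregateForValues(cache, extractors, "value-to-find");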
    Best regards,
    Robert

  • Right Click Move menu has duplicate entries

    One of my users frequently sorts mail into subfolders on a shared mailbox. They most commonly do this by right clicking the message, selecting Move, then selecting the folder. 
    Somehow, they ended up with a duplicate entry in the right click menu. In other words, the folder "October 2014" appears twice in the list.
    The first question is: how can the duplicate entry be removed?
    Secondly, one of these menu items works. When it is selected, the message can then be found in the "October 2014" folder as expected. The other of these menu items does not work. When it is selected, the mail is no longer in its original location (inbox, etc.) but also does not appear in the "October 2014" folder. Searching her mailbox (even searching all of Outlook) returns no results when searching for a message moved this way. I've also used Get-MailboxFolderStatistics to examine the folder structure of both her account and the shared mailbox account. Only the one folder named "October 2014" appears to exist.
    Second question: How can I find where messages moved with the duplicate entry were moved to? How can I move them to the correct folder?

    Hi,
    Do you mean there is a duplicate entry in the Move To Folder history that you see when selecting Move?
    We can try to clear the Move To Folder history to check the result. To do this, please follow:
    1. Exit Outlook.
    2. Open your Registry Editor and navigate to:
    Outlook 2013
    HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Profiles\profile name\0a0d020000000000c000000000000046
    Outlook 2010 or previous on Windows NT, 2000, XP, Vista, Windows 7 or Windows 8
    HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles\profile name\0a0d020000000000c000000000000046      
    3. Find the registry value named 101f031e.
    4. Rename or delete the value and restart Outlook.
    Now, the Move To Folder history will be cleared, and the next time you move an email to a folder, Outlook will add the entry to the history again.
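    As a rough command-line equivalent of steps 2-4 for Outlook 2013 (the "profile name" below is a placeholder for the user's actual Outlook profile name, and the backup file name is just an example):
    rem Back up the key first, then remove the Move To Folder history value.
    reg export "HKCU\Software\Microsoft\Office\15.0\Outlook\Profiles\profile name\0a0d020000000000c000000000000046" movetohistory-backup.reg
    reg delete "HKCU\Software\Microsoft\Office\15.0\Outlook\Profiles\profile name\0a0d020000000000c000000000000046" /v 101f031e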
    For the second question, how did the user add the shared mailbox in his Outlook? Please note that in some configurations, you are limited to using the Current Folder search scope when searching in a shared mailbox. Please try to log in directly to the shared mailbox and then search for the disappeared emails. Please let me know the result.
    Hope this helps.
    Regards,
    Steve Fan
    TechNet Community Support
    It's recommended to download and install the Office Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office programs.

  • Validating Duplicate Entries In itab

    Hi All
    How should I approach restricting duplicate entries in the database?
    The scenario is that I have to enter some data in a table control, and while saving, I have to check whether the same entry already exists in the database or not.
    How should I approach this? Awaiting your valuable response.
    Thanks in advance
    Pradipta

    Hai Mishra
    DELETE ADJACENT DUPLICATES FROM itab.
    Additions
    1. ... COMPARING f1 f2 ...
    2. ... COMPARING ALL FIELDS
    Effect
    Deletes neighboring, duplicate entries from the internal table itab . If there are n duplicate entries, the first entry is retained and the other n - 1 entries are deleted.
    Two lines are considered to be duplicated if their default keys match.
    The return code value is set as follows:
    SY-SUBRC = 0: At least one duplicate exists; at least one entry was deleted.
    SY-SUBRC = 4: No duplicates exist; no entry was deleted.
    Addition 1
    ... COMPARING f1 f2 ...
    Effect
    Two lines of the internal table itab are considered to be duplicates if the specified fields f1 , f2 , .... match.
    Addition 2
    ... COMPARING ALL FIELDS
    Effect
    Two lines are considered to be duplicates if all fields of the table entries match.
    Notes
    The DELETE ADJACENT DUPLICATES statement is especially useful if the internal table itab is sorted by fields (whether in ascending or descending order) which were compared during duplicate determination. In this case, the deletion of neighbouring duplicates is the same as the deletion of all duplicates.
    If a comparison criterion is only known at runtime, it can be specified dynamically as the content of a field name by using COMPARING ... (name) ... . If name is blank at runtime, the comparison criterion is ignored. If name contains an invalid component name, a runtime error occurs.
    Comparison criteria - statically or dynamically specified - can be further restricted by specifying the offset and/or length.
    Note
    Performance
    Deleting a line from an internal table incurs index maintenance costs which depend on the index of the line to be deleted. The runtime depends on the line width of the table.
    For example, deleting a line in the middle of an internal table with 200 entries requires about 10 msn (standardized microseconds).
    Deleting a range of entries with " DELETE itab FROM idx1 TO idx2. " or deleting a set of entries with " DELETE itab WHERE ... " only incurs index maintenance costs once. Compared with a LOOP which deletes line-by-line, this is much faster.
    To delete neighboring, duplicate entries from an internal table, use the variant " DELETE ADJACENT DUPLICATES FROM itab. " instead of LOOP constructions.
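    For example, a minimal usage sketch (itab and the column f1 are placeholders for your own table and comparison field):
    " Sort by the comparison field first so that duplicates become adjacent,
    " then DELETE ADJACENT DUPLICATES removes all of them in one pass.
    SORT itab BY f1.
    DELETE ADJACENT DUPLICATES FROM itab COMPARING f1.
    IF sy-subrc = 0.
      " at least one duplicate row was removed
    ENDIF.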
    Thanks & regards
    Sreeni

  • MySQL duplicate entry deletion in 5.5.9

    Hey all,
    I've noticed that in the newest MySQL, I can no longer do:
    ALTER TABLE test ADD UNIQUE INDEX unique_index (id);
    ALTER TABLE test DROP INDEX unique_index;
    to remove duplicates. I get the following error:
    ERROR 1062 (23000): Duplicate entry '3' for key 'unique_index'
    This worked fine before the major SQL upgrade. It also works fine on version 5.1.52 on my web host.  I searched through the 5.5.9 release notes and I didn't see anything that should have changed this, but I'm no MySQL guru! Is this a feature or a bug? Any alternate suggestions for deleting duplicates that will work on this version and older versions of MySQL?
    Thanks!
    Scott
    Edit: Just for clarity, the table I'm working on has a single primary key column (UNIQUE_KEY) (with 80+ other data columns) and I'm only detecting the duplicate row based on the value of the primary key column.
    Last edited by firecat53 (2011-02-25 03:33:30)

    Alright, since I was apparently sucked in by this particular method the first time I googled the answer, care to enlighten me on the 'more practical' non-joke methods, as there appears to be more than one option.
    Also, why is that a joke of a feature? Like I said, I'm no MySQL guru so I'm ready to learn!
    Thanks,
    Scott
    edit: is this a 'more practical' method? It's hard when you're learning on your own to sort out the good from the bad sometimes
    edit2: using the above method with this table:
    +------+------+-------+
    | id   | data | data1 |
    +------+------+-------+
    |    1 |   10 |   100 |
    |    2 |  100 |  1000 |
    |    3 | 1000 |   500 |
    |    3 | 1000 |   500 |
    +------+------+-------+
    and this query:
    delete test from test left outer join (select min(id) as RowID, id from test group by id) as KeepRows on test.id=KeepRows.RowID where KeepRows.RowID is null;
    I can't quite get it to work...it doesn't delete any rows. What am I missing there?
    edit3: This method works -- is it a 'good' method or should I be looking at something different? Again, I'm just matching on duplicates in the primary key column (before the primary key is assigned):
    create table test1 like test;
    insert into test1 select * from test group by id;
    drop table test;
    rename table test1 to test;
    Last edited by firecat53 (2011-02-25 03:55:10)

  • ORA-13223: duplicate entry for string in SDO_GEOM_METADATA table and

    I got the above error while trying to insert a record into the table SDO_GEOM_METADATA. However, when querying this table I did not find any duplicate table_name, column_name pair that matches the error.
    Here are the steps that I worked on:
    1. Add a geometry column into an existing table.
    ALTER TABLE GEO_MAP ADD (ORG_GEOM mdsys.sdo_geometry);
    2. Register the new column into mdsys
    insert into USER_SDO_GEOM_METADATA
    values ('GEO_MAP','ORG_GEOM',
    mdsys.sdo_dim_array(
    mdsys.sdo_dim_element('LONG',-180,180,0.00005),
    mdsys.sdo_dim_element('LAT',-90,90,0.00005)),
    8307);
    I got the error ORA-13223: duplicate entry for string in SDO_GEOM_METADATA table even though there was no such record in there.
    3. I inserted values in the column ORG_GEOM fine.
    4. However, when I tried to create the index for this column I got the error:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13203: failed to read USER_SDO_GEOM_METADATA view
    ORA-13203: failed to read
    Please help.
    Thanks.

    SQL> select * from mdsys.sdo_geom_metadata_table;
    SDO_OWNER  SDO_TABLE_NAME         SDO_COLUMN_NAME  SDO_DIMINFO(SDO_DIMNAME, SDO_LB, SDO_UB, SDO_TOLERANCE)                                                 SDO_SRID
    ---------  ---------------------  ---------------  -------------------------------------------------------------------------------------------------------  --------
    QW_USER1   GEO_REF                LOC_GEOM         SDO_DIM_ARRAY(SDO_DIM_ELEMENT('LONG', -180, 180, .00005), SDO_DIM_ELEMENT('LAT', -90, 90, .00005))        8307
    MDSYS      SDO_CMT_CBK_RTREE_TAB  GEOM             SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .000000005), SDO_DIM_ELEMENT('Y', -90, 90, .000000005))
    The situation is we have 2 tables (GEO_MAP and GEO_REF) that have spatial columns. Everything worked fine until one of the queries that searched through the table GEO_MAP ran so slowly that we decided to rebuild the related spatial index by dropping and recreating it. However, after I dropped it, I could not recreate it. I keep getting the error:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13203: failed to read USER_SDO_GEOM_METADATA view
    ORA-13203: failed to read
    Any help is very much appreciated.

  • Duplicate entry for WebUpdatePassword

    Hi,
    After Siebel installation, I found there is a duplicate entry for WebUpdatePassword in the C:\sea78\SWEApp\BIN\eapps.cfg file, i.e. I am able to log in to the Siebel GUI after removing the duplicate line "WebUpdatePassword = xxxxxxxxxxxxxxxxxxxxxxxxxxxx". Please help me work out whether there is something wrong with the installation or whether it is a Siebel bug. Please let me know if I can provide any further information.
    [salesce_psj]
    ConnectString = siebel.TCPIP.None.None://host130:2321/sebl78eip/SalesCEObjMgr_psj
    EnableExtServiceOnly = TRUE
    WebPublicRootDir = c:\sea78\SWEApp\public\psj
    WebUpdatePassword = xxxxxxxxxxxxxxxxxxxxxxxxxxxx
    WebUpdatePassword = xxxxxxxxxxxxxxxxxxxxxxxxxxxx
    [smc_psj]
    ConnectString = siebel.TCPIP.None.None://host130:2321/sebl78eip/SMCObjMgr_psj
    WebPublicRootDir = c:\sea78\SWEApp\public\psj
    WebUpdatePassword = xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Kind Regards
    Khirod Patra

  • Deleting duplicate entries based on one column

    Guys,
    I have two internal tables i_tab1 and i_tab2.
    I want to copy the contents of i_tab1 into i_tab2 while removing any duplicate entries.
    I am comparing on just one column in i_tab1.

    Hi,
        Try this way
    DATA lv_tabix TYPE sy-tabix.
    " itab1 must be sorted by FIELD for the BINARY SEARCH to work.
    LOOP AT itab2 INTO wa_itab2.
      lv_tabix = sy-tabix.                      " remember the itab2 row index before READ TABLE resets sy-tabix
      READ TABLE itab1 INTO wa_itab1 WITH KEY field = wa_itab2-field BINARY SEARCH.
      IF sy-subrc = 0.
        " ... code ...
        MODIFY itab2 INDEX lv_tabix FROM wa_itab2 TRANSPORTING field. " field is the modified field
      ENDIF.
    ENDLOOP.
    Regards
    Bala Krishna
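    If the requirement is only the copy-and-deduplicate itself, a shorter sketch using DELETE ADJACENT DUPLICATES could also work ("field" is a placeholder for the comparison column, as above):
    " Sketch: copy itab1 into itab2, then keep only the first row per value of field.
    itab2[] = itab1[].
    SORT itab2 BY field.
    DELETE ADJACENT DUPLICATES FROM itab2 COMPARING field.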

  • Un-indexed searches killing server

    I am suspecting our ldap server is getting killed with un-indexed searches. our v240 is overloaded and many searches time out. Performance is terrible to say the least. I have learned through a support call that an entry of notes=U is a sign that a search was not indexed. Note that I have many of the following in the log file:
    [23/May/2008:09:26:10 -0600] conn=3794491 op=26699 msgId=216518 - RESULT err=0 tag=101 nentries=14 etime=0 notes=U
    [23/May/2008:09:26:30 -0600] conn=3927871 op=1 msgId=2 - RESULT err=0 tag=101 nentries=1 etime=1 notes=U
    [23/May/2008:09:26:30 -0600] conn=3927871 op=2 msgId=3 - RESULT err=0 tag=101 nentries=1 etime=0 notes=U
    [23/May/2008:09:28:39 -0600] conn=3794491 op=26739 msgId=216709 - RESULT err=0 tag=101 nentries=46 etime=0 notes=U
    [23/May/2008:09:30:46 -0600] conn=3794492 op=1515 msgId=216909 - RESULT err=0 tag=101 nentries=64 etime=1 notes=U
    [23/May/2008:09:30:52 -0600] conn=3794493 op=254 msgId=216926 - RESULT err=4 tag=101 nentries=50 etime=0 notes=U
    [23/May/2008:09:33:46 -0600] conn=3794492 op=1532 msgId=217165 - RESULT err=0 tag=101 nentries=3 etime=0 notes=U
    m1ldap:root bash /var/opt/mps/serverroot/slapd-m1ldap/logs #
    If I look at one of these searches in more depth I see the following (from the access log)
    m1ldap:root bash /var/opt/mps/serverroot/slapd-m1ldap/logs # cat access | grep 217165
    [23/May/2008:09:33:46 -0600] conn=3794492 op=1532 msgId=217165 - SRCH base="pipstoreowner=s0237271,o=lethbridgecollege.ab.ca,o=piserverdb" scope=2 filter="(&(&(&(displayName=*)(memberOfPIBook=e11a076dda8d9f))(|(objectClass=PITYPEPERSON)(objectClass=PITYPEGROUP)))(objectClass=*))" attrs="displayName piEntryID piEntryID displayName displayName memberOfPIBook memberOfPIBook memberOfPIGroup memberOfPIGroup piEmail1CN piEmail1TransType piEmail1Type piEmail1 piEmail2CN piEmail2TransType piEmail2Type piEmail2 piEmail3CN piEmail3TransType piEmail3Type piEmail3 piEmail1 piPhone1Type piPhone1 piPhone2Type piPhone2 piPhone3Type piPhone3 piPhone4Type piPhone4 piPhone5Type piPhone5 piPhone6Type piPhone6 piPhone7Type piPhone7 piPhone8Type piPhone8 piPhone9Type piPhone9 piPhone10Type piPhone10 piPhone11Type piPhone11 piPhone12Type piPhone12 piPhone13Type piPhone13 piPhone14Type piPhone14 piPhone15Type piPhone15 piPhone16Type piPhone16 piPhone17Type piPhone17 piPhone18Type piPhone18 piPhone19Type piPhone19 piPhone20Type piPhone20 db:expr:db:officelocation||db:campu db:officelocation||db:campu db:campu db:buildin db:floo db:officenumber objectClass"
    [23/May/2008:09:33:46 -0600] conn=3794492 op=1532 msgId=217165 - SORT displayName (3)
    [23/May/2008:09:33:46 -0600] conn=3794492 op=1532 msgId=217165 - RESULT err=0 tag=101 nentries=3 etime=0 notes=U
    If I look at this filter in more depth:
    base="pipstoreowner=s0237271,o=lethbridgecollege.ab.ca,o=piserverdb" scope=2 filter="(&(&(&(displayName=*)(memberOfPIBook=e11a076dda8d9f))(|(objectClass=PITYPEPERSON)(objectClass=PITYPEGROUP)))(objectClass=*))"
    and I check our indexes.. they all seem to be there. What am I missing? How do I go about finding what is not indexed..then indexing it?
    Any help would be great.
    Thanks,
    Darren

    I think it is related to (objectClass=*) which I believe is not indexed when part of sub searches. May I suggest that you write a test script to try the 2 filters that follow, and see what turns up in the log.
    (&(&(&(displayName=*)(memberOfPIBook=e11a076dda8d9f))(|(objectClass=PITYPEPERSON)(objectClass=PITYPEGROUP)))(objectClass=*))
    (&(&(displayName=*)(memberOfPIBook=e11a076dda8d9f))(|(objectClass=PITYPEPERSON)(objectClass=PITYPEGROUP)))
    Both should return the same details, as you are filtering on objectclasses that are either PITYPEPERSON or PITYPEGROUP.
    Also check out http://forum.java.sun.com/thread.jspa?threadID=5277506&tstart=210, which has a similar issue, and seems to be related to having searches with multiple similar attributes in a filter.
    Edited by: nirving on May 27, 2008 1:24 PM
    Edited by: nirving on May 27, 2008 1:26 PM
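    For example, the two filters could be tried directly with the Directory Server ldapsearch client, roughly as sketched below (host, port, bind DN and password are placeholders; the base DN and filters are taken from the logged search). Then compare the nentries and notes values the two searches produce in the access log:
    # Placeholders: replace the host, port, bind DN and password with your own values.
    ldapsearch -h m1ldap -p 389 -D "cn=Directory Manager" -w "password" \
      -b "pipstoreowner=s0237271,o=lethbridgecollege.ab.ca,o=piserverdb" -s sub \
      "(&(&(&(displayName=*)(memberOfPIBook=e11a076dda8d9f))(|(objectClass=PITYPEPERSON)(objectClass=PITYPEGROUP)))(objectClass=*))" dn

    ldapsearch -h m1ldap -p 389 -D "cn=Directory Manager" -w "password" \
      -b "pipstoreowner=s0237271,o=lethbridgecollege.ab.ca,o=piserverdb" -s sub \
      "(&(&(displayName=*)(memberOfPIBook=e11a076dda8d9f))(|(objectClass=PITYPEPERSON)(objectClass=PITYPEGROUP)))" dn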

  • SharePoint Foundation 2010 Search returns no results

    Hi,
    SharePoint Foundation 2010 Search is not showing any results after configuring the steps from the link below.
    http://wiki.sirkit.ca/2011/04/sharepoint-foundation-2010-search-returns-no-results/
    My Server Setup : 
    SharePoint Foundation 2010
    SQL Server 2008 r2
    We have created two web applications and configured the search service as per the above-mentioned link.
    We have also migrated a site from MOSS 2007 to SharePoint 2010 into one of the web applications, and we assigned the search database to the newly migrated content database.
    Previously, Forms-Based Authentication was used on the MOSS 2007 site; on SP 2010 we are working with Windows authentication.
    The database shows the crawl is working, with success results in the SQL tables as well; it's just not able to show the results on the page.
    Thanks in Advance
    Dinesh
    Regards, Dinesh Reddy.

    Don't look at the databases, it's a bad habit to get into and will lead to confusion.
    In search administration, are you getting any success messages in the crawl history, and is the number of items in your index non-zero?
    If so that implies that the crawl process is working and it's the user security that's the issue.
    To confirm, try searching for a file whilst logged in as the crawl account; that should return results.

  • Akonadi database error code: 1062 Duplicate entry for key

    Since my email provider migrated their webmail platform, I keep getting an error each time kmail attempts to get emails from their imap server: could not create collection INBOX resourceid: 17
    I double checked the provided imap settings have not changed and are correct : imap.laposte.net port 993 using SSL/TLS
    As you can see in the pasted akonadi self-test report below, I have a few errors related to innodb, the absence of mysql tables in local/share/akonadi/db_data/ and the table performance_schema having the wrong structure. Those have been there for several months but I've been unable to fix them and I didn't care because my email was working. I suppose they are unrelated to my current issues.
    Any help I can get is much welcome.
    Akonadi Server Self-Test Report
    ===============================
    Test 1: SUCCESS
    Database driver found.
    Details: The QtSQL driver 'QMYSQL' is required by your current Akonadi server configuration and was found on your system.
    File content of '/home/the_user/.config/akonadi/akonadiserverrc':
    [%General]
    Driver=QMYSQL
    [QMYSQL]
    Name=akonadi
    Host=
    Options="UNIX_SOCKET=/tmp/akonadi-the_user.kKTLSR/mysql.socket"
    ServerPath=/usr/bin/mysqld
    StartServer=true
    User=
    Password=
    [Debug]
    Tracer=null
    [QPSQL]
    StartServer=true
    Name=akonadi
    Host=
    User=
    Password=
    Port=5432
    [SQLITE]
    Name=akonadi
    Test 2: SUCCESS
    Akonadi is not running as root
    Details: Akonadi is not running as a root/administrator user, which is the recommended setup for a secure system.
    Test 3: SUCCESS
    MySQL server found.
    Details: You have currently configured Akonadi to use the MySQL server '/usr/bin/mysqld'.
    Make sure you have the MySQL server installed, set the correct path and ensure you have the necessary read and execution rights on the server executable. The server executable is typically called 'mysqld'; its location varies depending on the distribution.
    Test 4: SUCCESS
    MySQL server is executable.
    Details: MySQL server found: /usr/bin/mysqld Ver 10.0.14-MariaDB-log for Linux on x86_64 (MariaDB Server)
    Test 5: ERROR
    MySQL server log contains errors.
    Details: The MySQL server error log file '/home/the_user/.local/share/akonadi/db_data/mysql.err' contains errors.
    File content of '/home/the_user/.local/share/akonadi/db_data/mysql.err':
    141129 9:34:33 [Warning] You need to use --log-bin to make --binlog-format work.
    2014-11-29 09:34:33 7f5b81d0b780 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
    141129 9:34:33 [Note] InnoDB: Using mutexes to ref count buffer pool pages
    141129 9:34:33 [Note] InnoDB: The InnoDB memory heap is disabled
    141129 9:34:33 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
    141129 9:34:33 [Note] InnoDB: Memory barrier is not used
    141129 9:34:33 [Note] InnoDB: Compressed tables use zlib 1.2.8
    141129 9:34:33 [Note] InnoDB: Using Linux native AIO
    141129 9:34:33 [Note] InnoDB: Using CPU crc32 instructions
    141129 9:34:33 [Note] InnoDB: Initializing buffer pool, size = 80.0M
    141129 9:34:33 [Note] InnoDB: Completed initialization of buffer pool
    141129 9:34:33 [Note] InnoDB: Highest supported file format is Barracuda.
    141129 9:34:33 [Note] InnoDB: The log sequence numbers 66340678877 and 66340678877 in ibdata files do not match the log sequence number 66344113222 in the ib_logfiles!
    141129 9:34:33 [Note] InnoDB: Database was not shutdown normally!
    141129 9:34:33 [Note] InnoDB: Starting crash recovery.
    141129 9:34:33 [Note] InnoDB: Reading tablespace information from the .ibd files...
    141129 9:34:33 [Note] InnoDB: Restoring possible half-written data pages
    141129 9:34:33 [Note] InnoDB: from the doublewrite buffer...
    141129 9:34:34 [Note] InnoDB: 128 rollback segment(s) are active.
    141129 9:34:34 [Note] InnoDB: Waiting for purge to start
    141129 9:34:34 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.20-68.0 started; log sequence number 66344113222
    141129 9:34:34 [Note] Plugin 'FEEDBACK' is disabled.
    141129 9:34:34 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist
    141129 9:34:34 [Warning] Can't open and lock time zone table: Table 'mysql.time_zone_leap_second' doesn't exist trying to live without them
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_host_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_user_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_account_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'host_cache' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'objects_summary_global_by_type' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'setup_actors' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'setup_objects' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'table_io_waits_summary_by_index_usage' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'table_io_waits_summary_by_table' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'table_lock_waits_summary_by_table' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_current' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_history' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_history_long' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_thread_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_account_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_user_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_host_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_stages_summary_global_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_current' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_history' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_history_long' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_thread_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_account_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_user_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_host_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_summary_global_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_digest' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'users' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'accounts' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'hosts' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'socket_instances' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'socket_summary_by_instance' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'socket_summary_by_event_name' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'session_connect_attrs' has the wrong structure
    141129 9:34:34 [ERROR] Native table 'performance_schema'.'session_account_connect_attrs' has the wrong structure
    141129 9:34:34 [Warning] Failed to load slave replication state from table mysql.gtid_slave_pos: 1146: Table 'mysql.gtid_slave_pos' doesn't exist
    141129 9:34:34 [Note] Reading of all Master_info entries succeded
    141129 9:34:34 [Note] Added new Master_info '' to hash table
    141129 9:34:34 [Note] /usr/bin/mysqld: ready for connections.
    Version: '10.0.14-MariaDB' socket: '/tmp/akonadi-the_user.kKTLSR/mysql.socket' port: 0 MariaDB Server
    2014-11-29 09:34:34 7f5b81c77700 InnoDB: Error: Table "mysql"."innodb_table_stats" not found.
    141129 15:14:56 [Warning] Aborted connection 58 to db: 'akonadi' user: 'the_user' host: '' (Unknown error)
    Test 6: SUCCESS
    MySQL server default configuration found.
    Details: The default configuration for the MySQL server was found and is readable at /usr/share/config/akonadi/mysql-global.conf.
    File content of '/usr/share/config/akonadi/mysql-global.conf':
    # Global Akonadi MySQL server settings,
    # These settings can be adjusted using $HOME/.config/akonadi/mysql-local.conf
    # Based on advice by Kris Köhntopp <[email protected]>
    [mysqld]
    # strict query parsing/interpretation
    # TODO: make Akonadi work with those settings enabled
    # sql_mode=strict_trans_tables,strict_all_tables,strict_error_for_division_by_zero,no_auto_create_user,no_auto_value_on_zero,no_engine_substitution,no_zero_date,no_zero_in_date,only_full_group_by,pipes_as_concat
    # sql_mode=strict_trans_tables
    # DEBUGGING:
    # log all queries, useful for debugging but generates an enormous amount of data
    # log=mysql.full
    # log queries slower than n seconds, log file name relative to datadir (for debugging only)
    # log_slow_queries=mysql.slow
    # long_query_time=1
    # log queries not using indices, debug only, disable for production use
    # log_queries_not_using_indexes=1
    # mesure database size and adjust innodb_buffer_pool_size
    # SELECT sum(data_length) as bla, sum(index_length) as blub FROM information_schema.tables WHERE table_schema not in ("mysql", "information_schema");
    # NOTES:
    # Keep Innob_log_waits and keep Innodb_buffer_pool_wait_free small (see show global status like "inno%", show global variables)
    #expire_logs_days=3
    #sync_bin_log=0
    # Use UTF-8 encoding for tables
    character_set_server=utf8
    collation_server=utf8_general_ci
    # use InnoDB for transactions and better crash recovery
    default_storage_engine=innodb
    # memory pool InnoDB uses to store data dictionary information and other internal data structures (default:1M)
    # Deprecated in MySQL >= 5.6.3
    innodb_additional_mem_pool_size=1M
    # memory buffer InnoDB uses to cache data and indexes of its tables (default:128M)
    # Larger values means less I/O
    innodb_buffer_pool_size=80M
    # Create a .ibd file for each table (default:0)
    innodb_file_per_table=1
    # Write out the log buffer to the log file at each commit (default:1)
    innodb_flush_log_at_trx_commit=2
    # Buffer size used to write to the log files on disk (default:1M for builtin, 8M for plugin)
    # larger values means less I/O
    innodb_log_buffer_size=1M
    # Size of each log file in a log group (default:5M) larger means less I/O but more time for recovery.
    innodb_log_file_size=64M
    # # error log file name, relative to datadir (default:hostname.err)
    log_error=mysql.err
    # print warnings and connection errors (default:1)
    log_warnings=2
    # Convert table named to lowercase
    lower_case_table_names=1
    # Maximum size of one packet or any generated/intermediate string. (default:1M)
    max_allowed_packet=32M
    # Maximum simultaneous connections allowed (default:100)
    max_connections=256
    # The two options below make no sense with prepared statements and/or transactions
    # (make sense when having the same query multiple times)
    # Memory allocated for caching query results (default:0 (disabled))
    query_cache_size=0
    # Do not cache results (default:1)
    query_cache_type=0
    # Do not use the privileges mechanisms
    skip_grant_tables
    # Do not listen for TCP/IP connections at all
    skip_networking
    # The number of open tables for all threads. (default:64)
    table_open_cache=200
    # How many threads the server should cache for reuse (default:0)
    thread_cache_size=3
    # wait 365d before dropping the DB connection (default:8h)
    wait_timeout=31536000
    # We use InnoDB, so don't let MyISAM eat up memory
    key_buffer_size=16K
    [client]
    default-character-set=utf8
    Test 7: SUCCESS
    MySQL server custom configuration found.
    Details: The custom configuration for the MySQL server was found and is readable at /home/the_user/.config/akonadi/mysql-local.conf.
    File content of '/home/the_user/.config/akonadi/mysql-local.conf':
    # workaround fix for akonadi db error after server upgrade
    # http://forum.kde.org/viewtopic.php?f=215&t=101004
    [mysqld]
    binlog_format=row
    Test 8: SUCCESS
    MySQL server configuration is usable.
    Details: The MySQL server configuration was found at /home/the_user/.local/share/akonadi/mysql.conf and is readable.
    File content of '/home/the_user/.local/share/akonadi/mysql.conf':
    # Global Akonadi MySQL server settings,
    # These settings can be adjusted using $HOME/.config/akonadi/mysql-local.conf
    # Based on advice by Kris Köhntopp <[email protected]>
    [mysqld]
    # strict query parsing/interpretation
    # TODO: make Akonadi work with those settings enabled
    # sql_mode=strict_trans_tables,strict_all_tables,strict_error_for_division_by_zero,no_auto_create_user,no_auto_value_on_zero,no_engine_substitution,no_zero_date,no_zero_in_date,only_full_group_by,pipes_as_concat
    # sql_mode=strict_trans_tables
    # DEBUGGING:
    # log all queries, useful for debugging but generates an enormous amount of data
    # log=mysql.full
    # log queries slower than n seconds, log file name relative to datadir (for debugging only)
    # log_slow_queries=mysql.slow
    # long_query_time=1
    # log queries not using indices, debug only, disable for production use
    # log_queries_not_using_indexes=1
    # mesure database size and adjust innodb_buffer_pool_size
    # SELECT sum(data_length) as bla, sum(index_length) as blub FROM information_schema.tables WHERE table_schema not in ("mysql", "information_schema");
    # NOTES:
    # Keep Innob_log_waits and keep Innodb_buffer_pool_wait_free small (see show global status like "inno%", show global variables)
    #expire_logs_days=3
    #sync_bin_log=0
    # Use UTF-8 encoding for tables
    character_set_server=utf8
    collation_server=utf8_general_ci
    # use InnoDB for transactions and better crash recovery
    default_storage_engine=innodb
    # memory pool InnoDB uses to store data dictionary information and other internal data structures (default:1M)
    # Deprecated in MySQL >= 5.6.3
    innodb_additional_mem_pool_size=1M
    # memory buffer InnoDB uses to cache data and indexes of its tables (default:128M)
    # Larger values means less I/O
    innodb_buffer_pool_size=80M
    # Create a .ibd file for each table (default:0)
    innodb_file_per_table=1
    # Write out the log buffer to the log file at each commit (default:1)
    innodb_flush_log_at_trx_commit=2
    # Buffer size used to write to the log files on disk (default:1M for builtin, 8M for plugin)
    # larger values means less I/O
    innodb_log_buffer_size=1M
    # Size of each log file in a log group (default:5M) larger means less I/O but more time for recovery.
    innodb_log_file_size=64M
    # # error log file name, relative to datadir (default:hostname.err)
    log_error=mysql.err
    # print warnings and connection errors (default:1)
    log_warnings=2
    # Convert table named to lowercase
    lower_case_table_names=1
    # Maximum size of one packet or any generated/intermediate string. (default:1M)
    max_allowed_packet=32M
    # Maximum simultaneous connections allowed (default:100)
    max_connections=256
    # The two options below make no sense with prepared statements and/or transactions
    # (make sense when having the same query multiple times)
    # Memory allocated for caching query results (default:0 (disabled))
    query_cache_size=0
    # Do not cache results (default:1)
    query_cache_type=0
    # Do not use the privileges mechanisms
    skip_grant_tables
    # Do not listen for TCP/IP connections at all
    skip_networking
    # The number of open tables for all threads. (default:64)
    table_open_cache=200
    # How many threads the server should cache for reuse (default:0)
    thread_cache_size=3
    # wait 365d before dropping the DB connection (default:8h)
    wait_timeout=31536000
    # We use InnoDB, so don't let MyISAM eat up memory
    key_buffer_size=16K
    [client]
    default-character-set=utf8
    # workaround fix for akonadi db error after server upgrade
    # http://forum.kde.org/viewtopic.php?f=215&t=101004
    [mysqld]
    binlog_format=row
    Test 9: SUCCESS
    akonadictl found and usable
    Details: The program '/usr/bin/akonadictl' to control the Akonadi server was found and could be executed successfully.
    Result:
    Akonadi 1.13.0
    Test 10: SUCCESS
    Akonadi control process registered at D-Bus.
    Details: The Akonadi control process is registered at D-Bus which typically indicates it is operational.
    Test 11: SUCCESS
    Akonadi server process registered at D-Bus.
    Details: The Akonadi server process is registered at D-Bus which typically indicates it is operational.
    Test 12: SKIP
    Protocol version check not possible.
    Details: Without a connection to the server it is not possible to check if the protocol version meets the requirements.
    Test 13: SUCCESS
    Resource agents found.
    Details: At least one resource agent has been found.
    Directory listing of '/usr/share/akonadi/agents':
    akonadibalooindexingagent.desktop
    akonadinepomukfeederagent.desktop
    akonotesresource.desktop
    archivemailagent.desktop
    birthdaysresource.desktop
    contactsresource.desktop
    davgroupwareresource.desktop
    facebookresource.desktop
    followupreminder.desktop
    googlecalendarresource.desktop
    googlecontactsresource.desktop
    icaldirresource.desktop
    icalresource.desktop
    imapresource.desktop
    invitationsagent.desktop
    kabcresource.desktop
    kalarmdirresource.desktop
    kalarmresource.desktop
    kcalresource.desktop
    kdeaccountsresource.desktop
    knutresource.desktop
    kolabproxyresource.desktop
    kolabresource.desktop
    localbookmarksresource.desktop
    maildirresource.desktop
    maildispatcheragent.desktop
    mailfilteragent.desktop
    mboxresource.desktop
    migrationagent.desktop
    mixedmaildirresource.desktop
    mtdummyresource.desktop
    newmailnotifieragent.desktop
    nntpresource.desktop
    notesresource.desktop
    openxchangeresource.desktop
    pop3resource.desktop
    sendlateragent.desktop
    vcarddirresource.desktop
    vcardresource.desktop
    Environment variable XDG_DATA_DIRS is set to '/usr/share:/usr/share:/usr/local/share'
    Test 14: ERROR
    Current Akonadi server error log found.
    Details: The Akonadi server reported errors during its current startup. The log can be found in /home/the_user/.local/share/akonadi/akonadiserver.error.
    File content of '/home/the_user/.local/share/akonadi/akonadiserver.error':
    DATABASE ERROR:
    Error code: 1062
    DB error: "Duplicate entry '93-INBOX' for key 'CollectionTable_parentAndNameIndex'"
    Error text: "Duplicate entry '93-INBOX' for key 'CollectionTable_parentAndNameIndex' QMYSQL3: Unable to execute statement"
    Query: "INSERT INTO CollectionTable (remoteId, remoteRevision, name, parentId, resourceId, enabled, syncPref, displayPref, indexPref, cachePolicyInherit, isVirtual) VALUES (:0, :1, :2, :3, :4, :5, :6, :7, :8, :9, :10)"
    [... the same DATABASE ERROR block is repeated many more times in this log ...]
    Test 15: ERROR
    Previous Akonadi server error log found.
    Details: The Akonadi server reported errors during its previous startup. The log can be found in /home/the_user/.local/share/akonadi/akonadiserver.error.old.
    File content of '/home/the_user/.local/share/akonadi/akonadiserver.error.old':
    DATABASE ERROR:
    Error code: 1062
    DB error: "Duplicate entry '93-INBOX' for key 'CollectionTable_parentAndNameIndex'"
    Error text: "Duplicate entry '93-INBOX' for key 'CollectionTable_parentAndNameIndex' QMYSQL3: Unable to execute statement"
    Query: "INSERT INTO CollectionTable (remoteId, remoteRevision, name, parentId, resourceId, enabled, syncPref, displayPref, indexPref, cachePolicyInherit, isVirtual) VALUES (:0, :1, :2, :3, :4, :5, :6, :7, :8, :9, :10)"
    [... the same DATABASE ERROR block is repeated many more times in this log ...]
    Control process died, committing suicide!
    Test 16: ERROR
    Current Akonadi control error log found.
    Details: The Akonadi control process reported errors during its current startup. The log can be found in /home/the_user/.local/share/akonadi/akonadi_control.error.
    File content of '/home/the_user/.local/share/akonadi/akonadi_control.error':
    Executable "akonadi_nepomuk_feeder" for agent "akonadi_nepomuk_feeder" could not be found!
    Test 17: ERROR
    Previous Akonadi control error log found.
    Details: The Akonadi control process reported errors during its previous startup. The log can be found in /home/the_user/.local/share/akonadi/akonadi_control.error.old.
    File content of '/home/the_user/.local/share/akonadi/akonadi_control.error.old':
    Executable "akonadi_nepomuk_feeder" for agent "akonadi_nepomuk_feeder" could not be found!

    After the upload you can delete the duplicates by comparing the employee ID.
    This will be possible only if you use OLE and read the file records line by line. I am not sure about the error handling with the OLE control; it may lead to some performance issues.
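    The same idea, sketched in Python rather than ABAP/OLE purely to illustrate the de-duplication step (the file name and column layout are assumptions):

    # Illustration only: keep the first record per employee ID while reading the
    # upload file line by line, mirroring the "compare the employee Id" advice.
    import csv

    seen_ids = set()
    unique_rows = []
    duplicates = 0

    with open("employees.csv", newline="") as f:   # assumed file name
        reader = csv.DictReader(f)                 # assumes a header row with an "employee_id" column
        for row in reader:
            emp_id = row["employee_id"]
            if emp_id in seen_ids:
                duplicates += 1                    # duplicate employee ID -> drop this record
                continue
            seen_ids.add(emp_id)
            unique_rows.append(row)

    print(f"kept {len(unique_rows)} unique records, dropped {duplicates} duplicates")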

  • While trying to open the order in CO03 or CO02, it does not open and shows the error message "Inconsistent data: order 107434 16.09.2013, duplicate entry O 0020"

    Hi,
    While trying to open the order in CO03 or CO02, I am getting the following error message: "Inconsistent data: order 107434 16.09.2013, duplicate entry O 0020". Please guide me on the way forward.
    Regards,
    Mastan.

    Hi Mastan
    The report mentioned by Rajen should clear the inconsistent records.
    Please observe that you could have found this report by yourself with a quick note search for this error message, or even by searching for old threads.
    Even a Google search points to the solution.
    You should do your own research before opening this kind of thread.
    BR
    Caetano
