SSIS Fast Load fails to copy correct number of rows

Step 1 - truncate the destination table.
On success:
Step 2 - using an ODBC source and an ODBC destination with the fast table load option, take three columns out of the source table and copy them to the destination.
In the source, column 1 is the primary key (int).
The other two columns are timestamps.
In the destination table, column 1 is int (no keys) and does not allow nulls; columns 2 & 3 allow nulls.
Normally the row counts in the source and destination tables match after a run. However, on occasion the destination table count is less than the source table count. On the ODBC destination we enable identity insert and check constraints. I can't see how we'd drop rows, since by definition the row needs to exist in the source (we're copying the primary key).
The first time this occurred, anecdotal information is that the source sql server was under memory stress.
Has anyone seen this behavior before? Any ideas on how to resolve it?
Ken

I just ran into this same issue. After a solid half day of troubleshooting I found this little 'fast load' setting to be the culprit.
We have a very simple copy operation taking rows out of an ODBC source, adding a column, and then stuffing them into an ODBC destination. All the operations were run on a development machine with a local SQL Server install and plenty of RAM/CPU headroom.
I'd enabled logging of all kinds everywhere trying to detect the problem, but nothing was tripped. When run under debug mode (in dev studio) I see the correct number of rows (684 in this case) reported being sent to the ODBC destination; however, when I look in the table itself I only ever see the first one.
As soon as I turned off the fast load option, I started getting the full data set moving over properly.
At this point, I'm of a mind to go through and remove 'fast load' from every one of my packages.  I'll take reliability over speed any day of the week.
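For anyone hitting the same symptom, a quick way to catch the discrepancy is to compare counts and list the missing keys after each run. A minimal T-SQL sketch, assuming hypothetical table names SrcTable and DestTable and the copied primary key column1:

    -- row counts on both sides of the copy
    SELECT (SELECT COUNT(*) FROM SrcTable)  AS source_rows,
           (SELECT COUNT(*) FROM DestTable) AS destination_rows;

    -- keys present in the source but missing from the destination
    SELECT s.column1
    FROM SrcTable AS s
    LEFT JOIN DestTable AS d ON d.column1 = s.column1
    WHERE d.column1 IS NULL;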

Similar Messages

  • PLSQL function does not return the correct number of rows?

    Hey folks. I'm still green when it comes to writing PLSQL. It's fun, rewarding and very frustrating. Hence, I'm turning to the experts. If you folks can help me understand what I'm doing wrong here, I'd really appreciate it.
    The code is somewhat specific to my company's product, but I think it should be easy to read and understand what I'm doing. If not, please let me know what I can clarify.
All I'm trying to do is determine if the most recent iteration of data available for a particular host is a full scan or not (level2). I go about this in the following manner:
1. Get the operatingsystem id, its scandate (preferred), the most recent scandate and its scan status from the table where all operating systems' data lives. Loop through all the OSes
(from this I set v_osid, v_mostrecentscandate, v_scandate).
2. Before doing the crazy logic, pick the low-hanging fruit:
2a. if the level2 status of the host is N, then v_level2 = 'N';
2b. if level2 = 'Y' and the mostrecentscandate and scandate are identical, then v_level2 = 'Y';
2c. for all other cases, go to 3.
3. Using v_mostrecentscandate, find all table ids that may hold the most recent instance of data for the host.
4. Loop through the concatenation of that id + _base. If you find the id in those tables, then store the id for the next step.
5. When I find the right id, I concatenate the id + _attr_data. For the host id, I look for any rows where attribute_value in (..) and the corresponding number_value is not null.
    5b. set v_level2 = 'Y'
    5c. otherwise, set v_level2 = 'N'
    6 end the loop
    7 wash, rinse, repeat for each OS.
    create or replace package body mostrecentlevel2 as
    function getMostRecentL2 return bdna_mostrecent_level2 pipelined IS
    v_lsid NUMBER;
    v_sql VARCHAR2(5000);
    v_sql_baseid NUMBER;
    v_sql_numv NUMBER;
    v_lsidt VARCHAR2(5000);
    v_lsidt2 VARCHAR2(5000);
    v_sql_rec VARCHAR2(5000);
    v_osid NUMBER;
    v_anchor DATE;
    v_ls CHAR(2);
    v_level2 CHAR(1);
    v_mostrecentscandate DATE;
    v_scandate DATE;
    cursor getOSinfo_cur is select operatingsystem_id, scandate, mostrecentscandate, level2 from bdna_all_os;
    cursor getlsID_cur is select id from local_scan where
              ((trunc(collect_start_time) - to_date(v_anchor))*24*60*60) <= ((to_date(v_mostrecentscandate) - to_date(v_anchor))*24*60*60)
              and
              ((trunc(collect_end_time) - to_date(v_anchor))*24*60*60) >= ((to_date(v_mostrecentscandate) - to_date(v_anchor))*24*60*60);
    getOSinfo_rec getOSinfo_cur%rowtype;
    getlsID_rec getlsID_cur%rowtype;
    BEGIN
    v_ls := 'ls';
    v_anchor := '01-JAN-01';
    FOR getOSinfo_rec IN getOSinfo_cur LOOP
         v_osid := getOSinfo_rec.operatingsystem_id;
         v_mostrecentscandate := getOSinfo_rec.mostrecentscandate;
         v_scandate := getOSinfo_rec.scandate;
         IF getOSinfo_rec.level2 = 'N' THEN
              v_level2 := 'N';
         ELSIF getOSinfo_rec.level2 = 'Y' THEN
              IF v_mostrecentscandate != v_scandate THEN
                   FOR getlsID_rec IN getlsID_cur LOOP
                        v_lsid := getlsID_rec.id;
                        v_lsidt := v_ls||v_lsid;
                        v_sql := 'select id from '||v_lsidt||'_base where id = '||chr(39)||v_osid||chr(39);
                        EXECUTE IMMEDIATE v_sql into v_sql_baseid;
                        IF SQL%ROWCOUNT > 0 THEN
                             v_lsidt2 := v_lsidt;
                             v_sql := '';
                        END IF;
                   END LOOP;
                   v_sql := 'select number_value from '||v_lsidt2||'_attr_data where
                             lower(attribute_name) IN ('||chr(39)||'numcpus'||chr(39)||', '||chr(39)||'totalmemory'||chr(39)||', '||chr(39)||'cpuutilpercent'||chr(39)||', '||chr(39)||'numprocesses'||chr(39)||')
                             and
                             number_value is not NULL
                             and
                             element_id = '||chr(39)||v_osid||chr(39);
                   EXECUTE IMMEDIATE v_sql into v_sql_numv;
                   IF SQL%ROWCOUNT > 0 THEN
                        v_level2 := 'Y';
                   ELSE v_level2 := 'N';
                   END IF;
              END IF;
              v_level2 := 'Y';
         END IF;
         PIPE ROW (mostRecentLevel2Format(v_osid,v_mostrecentscandate,v_level2));
    END LOOP;
    END;
    END;
/
Now some will ask why I'm using pipelining? Again, I'm green... I was reading around, looking for a way to make this code run as fast as possible (because it potentially has to go through 56K records and perform the expensive work on each).
I also realize I'm not providing the type or package code, and that's because I think I'm good on that. The code above compiles just fine without errors, and when it runs it only returns 6 consecutive rows... I'm expecting 70K lol. So I know I'm doing something wrong.
Any thoughts?
Oh, forgot to add: this is on 11g R1 Enterprise Edition.
    Edited by: ErrolDC on Nov 14, 2011 4:52 PM
    Edited by: ErrolDC on Nov 14, 2011 5:07 PM
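For reference, a pipelined table function is consumed through the TABLE operator. Assuming the package spec matches the body above (an assumption, since the spec wasn't posted), checking the output would look like this sketch:

    -- total rows the function pipes out
    select count(*) from table(mostrecentlevel2.getMostRecentL2);
    -- peek at the first 20 rows
    select * from table(mostrecentlevel2.getMostRecentL2) where rownum <= 20;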

ErrolDC wrote:
Hey folks. I'm still green when it comes to writing PLSQL. It's fun, rewarding and very frustrating. Hence, I'm turning to the experts. If you folks can help me understand what I'm doing wrong here, I'd really appreciate it. The code is somewhat specific to my company's product, but I think it should be easy to read and understand what I'm doing. If not, please let me know what I can clarify.
Post a complete script that people who aren't as familiar with the application as you are can run to re-create the problem and test their ideas. In this case, that includes CREATE TABLE and INSERT statements for the tables used (just the columns needed for this job), a query that uses the function, and the results you want from that query given the data you posted.
You're calling TO_DATE with a DATE argument. Why are you calling TO_DATE at all?
It seems like this condition:
((trunc(collect_start_time) - to_date(v_anchor))*24*60*60) <= ((to_date(v_mostrecentscandate) - to_date(v_anchor))*24*60*60)
is equivalent to
TRUNC (collect_start_time) <= v_mostrecentscandate
That probably has nothing to do with why you're only getting 6 rows.
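To spell out the simplification: collect_start_time, v_mostrecentscandate and v_anchor are all DATEs, so subtracting the same anchor from both sides and multiplying both sides by the same positive constant leaves each comparison unchanged. Under that reading (an assumption, applied symmetrically to the end-time predicate as well), the cursor reduces to this sketch:

    cursor getlsID_cur is
        select id
          from local_scan
         where trunc(collect_start_time) <= v_mostrecentscandate
           and trunc(collect_end_time) >= v_mostrecentscandate;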

  • OLEDB Destination Fast Load Auto Truncation

    I have one Data Flow task with OLEDB Source(for SQL Server) and OLEDB Destination(for SQL Server).
    In OLEDB destination I have used Fast Load option. Also I handled error rows to move to error table.
My source has columns column1 (int) and Column2 (varchar(10)).
The destination has two columns: column1 (int) and Column2 (varchar(7)).
The error table has 4 columns: column1 (int), Column2 (varchar(10)), Error Number (nvarchar(max)) and Error Description (varchar(max)).
In the source, Column2 values are more than 7 characters.
When I run the package, the destination loads only the first 7 characters of Column2; the remaining characters are truncated. I don't want this. If the destination length is insufficient, the entire row should be moved to my error table.
When I check the Error Output tab of the destination, even the Truncation option is in a disabled state, so I can't enable Redirect Row.

    Truncation must raise an error and the error rows should redirect in your scenario, so what you described does not make sense.
    No, when writing to SQL Server the data will be truncated, without error.
    The only thing you get is a warning when designing the package.
    You get truncation errors when you try to put data longer than the column width in the data flow buffer, i.e. at the source or at transformations, but not at the destination apparently.
@Prabu: you can check the length using a conditional split. If the length is too long, redirect the row yourself to the error table (see the sketch after this reply).
    MCSE SQL Server 2012 - Please mark posts as answered where appropriate.
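As a supplement to the conditional split (the SSIS expression there would be along the lines of LEN(Column2) > 7), the over-length rows can also be spotted up front at the source with plain T-SQL. A minimal sketch, assuming a hypothetical source table name dbo.SourceTable and the varchar(7) destination width from the post:

    -- rows whose Column2 will not fit into the varchar(7) destination column
    SELECT column1, Column2, LEN(Column2) AS actual_length
    FROM dbo.SourceTable          -- hypothetical name for the source table
    WHERE LEN(Column2) > 7;       -- 7 = destination column width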

  • New table without statistics returns invalid number of rows

    Hi,
    I've been searching for a while now for an explanation for the following "problem"
    We have an Oracle 11.1.0.7 database on AIX5.3
    In this database we have two tables, called KRT_PRODUCTS_INFO and KRT_STRUCTURES_INFO ( the table name don't really matter ).
    The scenario is as following:
    If we recreate these tables like:
    CREATE TABLE KRT_PRODUCT_INFO_BUP AS SELECT * FROM KRT_PRODUCT_INFO;
    DROP TABLE KRT_PRODUCT_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_PRODUCT_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_PRODUCT_INFO_X1 ON KRT_PRODUCT_INFO (PRODUCT_NUMBER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_PRODUCT_INFO_X2 ON KRT_PRODUCT_INFO (PIM_ARTICLEREVISIONID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_PRODUCT_INFO (SELECT * FROM KRT_PRODUCT_INFO_BUP);
    COMMIT;
    CREATE TABLE KRT_STRUCTURE_INFO_BUP AS SELECT * FROM KRT_STRUCTURE_INFO;
    DROP TABLE KRT_STRUCTURE_INFO CASCADE CONSTRAINTS;
    CREATE TABLE KRT_STRUCTURE_INFO (...) TABLESPACE PIM_DATA NOLOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING;
    CREATE INDEX KRT_STRUCTURES_X1 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_REV_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X2 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_IDENTIFIER) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    CREATE INDEX KRT_STRUCTURES_X3 ON KRT_STRUCTURE_INFO (STRUCTURE_GRP_ID) NOLOGGING TABLESPACE PIM_DATA NOPARALLEL;
    INSERT INTO KRT_STRUCTURE_INFO (SELECT * FROM KRT_STRUCTURE_INFO_BUP);
    COMMIT;
and we run a complex query with these two tables, the query returns only a couple of rows (exactly 24!).
If we however generate statistics on these tables after creation, the correct number of rows is returned, being 1,167,991 rows.
    The statistics are gathered using:
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_PRODUCT_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
    BEGIN
    SYS.DBMS_STATS.GATHER_TABLE_STATS (
    OwnName => 'PIM_KRG'
    ,TabName => 'KRT_STRUCTURE_INFO'
    ,Estimate_Percent => NULL
    ,Method_Opt => 'FOR ALL COLUMNS SIZE REPEAT '
    ,Degree => NULL
    ,Cascade => TRUE
    ,No_Invalidate => FALSE);
    END;
/
I can imagine that the 'plan' for the query used is wrong because of missing statistics.
    But I can't imagine that it would actually return an incorrect number of rows.
    I tested this behaviour in Toad and sqlplus ( first thought it was Toad ), and both behave the same.
Another fact is that the "problem" is NOT reproducible on our TEST environment, which runs Oracle 11.1.0.7 on Windows 2008.
Just to be sure, this is the "complex" query used. It is not developed by me, and I think it looks somewhat strange, but that shouldn't matter:
    SELECT sr."Identifier" STRUCTURE_IDENTIFIER
    , ar_i."Identifier" ITEM_NUMBER
    , SUM (REPLACE (NVL (s.HIDE_LE10, 0) + NVL (p.HIDE_LE10, 0), 2, 1))
    hide_le10
    , SUM (REPLACE (NVL (s.HIDE_LE30, 0) + NVL (p.HIDE_LE30, 0), 2, 1))
    hide_le30
    , SUM (REPLACE (NVL (s.HIDE_LE40, 0) + NVL (p.HIDE_LE40, 0), 2, 1))
    hide_le40
    , SUM (REPLACE (NVL (s.HIDE_LE50, 0) + NVL (p.HIDE_LE50, 0), 2, 1))
    hide_le50
    , SUM (REPLACE (NVL (s.HIDE_LE55, 0) + NVL (p.HIDE_LE55, 0), 2, 1))
    hide_le55
    , SUM (REPLACE (NVL (s.HIDE_LE60, 0) + NVL (p.HIDE_LE60, 0), 2, 1))
    hide_le60
    , SUM (REPLACE (NVL (s.HIDE_LE70, 0) + NVL (p.HIDE_LE70, 0), 2, 1))
    hide_le70
    , SUM (REPLACE (NVL (s.HIDE_LE75, 0) + NVL (p.HIDE_LE75, 0), 2, 1))
    hide_le75
    , SUM (REPLACE (NVL (s.HIDE_LE58, 0) + NVL (p.HIDE_LE58, 0), 2, 1))
    hide_le58
    , SUM (REPLACE (NVL (s.HIDE_LE80, 0) + NVL (p.HIDE_LE80, 0), 2, 1))
    hide_le80
    , SUM (REPLACE (NVL (s.HIDE_LE90, 0) + NVL (p.HIDE_LE90, 0), 2, 1))
    hide_le90
    , SUM (REPLACE (NVL (s.HIDE_LE92, 0) + NVL (p.HIDE_LE92, 0), 2, 1))
    hide_le92
    , SUM (REPLACE (NVL (s.HIDE_LE94, 0) + NVL (p.HIDE_LE94, 0), 2, 1))
    hide_le94
    , SUM (REPLACE (NVL (s.HIDE_LE96, 0) + NVL (p.HIDE_LE96, 0), 2, 1))
    hide_le96
    , COUNT (*) cnt
    FROM KRAMP_HPM_MAIN."StructureRevision" sr
    , KRAMP_HPM_MAIN."StructureGroupRevision" sgr
    , KRAMP_HPM_MASTER."ArticleStructureMap" asm
    , KRAMP_HPM_MASTER."ArticleRevision" ar_p
    , KRAMP_HPM_MASTER."ArticleDetail" ad_p
    , KRAMP_HPM_MASTER."ArticleRevision" ar_i
    , KRAMP_HPM_MASTER."ArticleDetail" ad_i
    , KRAMP_HPM_MASTER."ArticleReference" ar
    , KRT_STRUCTURE_INFO s
    , KRT_PRODUCT_INFO p
    WHERE sr."StructureID" = sgr."StructureID"
    AND sgr."StructureGroupID" = asm."StructureGroupID"
    AND ar_p."ID" = asm."ArticleRevisionID"
    AND ar_p."ID" = ad_p."ArticleRevisionID"
    AND ad_p."Res_Text100_02" = 'PRODUCT'
    AND ar_i."ID" = ad_i."ArticleRevisionID"
    AND ad_i."Res_Text100_02" = 'ARTICLE'
    AND ar."ArticleRevisionID" = ar_p."ID"
    AND ar."ReferencedSupplierAID" = ar_i."Identifier"
    AND s.STRUCTURE_GRP_REV_ID = sgr."ID"
    AND p.PIM_ARTICLEREVISIONID = ar_p."ID"
GROUP BY sr."Identifier", ar_i."Identifier";
Any ideas are welcome...
    Thanks
    FJFranken

    Hemant K Chitale wrote:
    These two tables are in the PIM_KRG schema while the other tables in the query are distributed across two other schemas "KRAMP_HPM_MAIN" and "KRAMP_HPM_MASTER" ?
    Do you happen to have the same table names occurring in multiple schemas - the query is then referencing the data in the wrong schema ?
Hemant K Chitale
Hi,
This is not the case. The KRAMP_HPM schemas are application-dedicated schemas.
And this also doesn't explain why the results are correct after generating statistics.
    Anyway thanks for the tip.
    FJFranken
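Whether it is the plan or the data that changes can be checked by capturing the execution plan before and after gathering statistics. A minimal sketch using the standard DBMS_XPLAN interface, shown here on one of the tables from the post with a bind placeholder:

    EXPLAIN PLAN FOR
    SELECT * FROM KRT_PRODUCT_INFO WHERE PRODUCT_NUMBER = :pn;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Running the same EXPLAIN PLAN for the full query before and after the DBMS_STATS calls, and diffing the two outputs, would show whether the 24-row result coincides with a plan change.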

Number of rows in a report, can it only be set at creation time?

I'm unable to change how pagination works for a report. If I create a report with the default settings, it will show 15 rows as expected. However, if I edit the page, go into the report and change the "Number of Rows", it seems to have no effect. I set it to 999 but the report still only shows the first 15 rows. If I delete the report, recreate it and select 999 as the number of rows when I create the report, it works perfectly. Looking at the edit page, the only change I can see is that "Number of Rows" is now set to 999. How come it has no effect when I do it after creation time?
    Is this a bug? Or am I doing this the wrong way? Is there another workaround than recreating the report.

    Hi Marius,
I faced the same issue a couple of times before. I changed the number of rows to show, but even then it showed 15 rows. I refreshed the report, but no change. Then I logged out, closed the browser, re-opened it and ran the report again; now it showed the correct number of rows. But I never needed to recreate a report for this reason.
    Zahid

How do you change the number of rows returned by an advanced datagrid... Only displays 1000 rows

I am using ColdFusion to query an Oracle table... The query returns approximately 2000 rows; however, the max rows the datagrid will display is 1000. Does anybody know how to change this? THANKS!!

    Thanks so much for your answer that sounds like a good idea...
I found the answer to my initial problem... In my ColdFusion I had set maxrows = 1000. So the datagrid was always returning 1000 records even when a lot more than that should have been returned... When I removed the maxrows parameter in the ColdFusion code, my datagrid was populated with the correct number of rows...
    thanks again for your help,
    Ronnie Raigrodski (Aka -- FlexNerd)

  • Fast load

    Hi,
I must load millions of records into a table with an XMLType field.
I'm using SQL*Loader conventional path to load the data, and I would like to reduce the loading time
(several hours to load a 22 GB data file per single
partition in a table that has 10 partitions).
Is there a method to load the data faster?
The tables don't have an XML schema; I use binary XML as the storage for the field,
and I want the XML data to be validated at load time (for this reason I don't use
direct path).
    my version is as follows:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
Thanks
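Since direct path is ruled out by the validation requirement, the usual conventional-path levers are a larger bind array and read buffer, plus running several conventional-path sessions concurrently against a pre-split input file, one per partition. A hedged sketch of the command line (file names and values are illustrative, not tuned):

    sqlldr userid=user/pass control=load_xml.ctl log=load_xml.log direct=false rows=500 bindsize=20971520 readsize=20971520

With ten partitions, each session's control file can target its own partition via INTO TABLE ... PARTITION (...), so the sessions load in parallel without direct path.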

    OK, Doing a quick test I see the following issues..
    1. Error Messages are not display correctly
    2. Bad File is not generated correctly
    SQL> host type sqlldr.log
    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Nov 29 08:41:37 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Control File:   testcase.ctl
    Data File:      testcase.dat
      Bad File:     testcase.bad
      Discard File:  none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Continuation:    none specified
    Path used:      Direct
    Table T1, loaded from every logical record.
    Insert option in effect for this table: APPEND
       Column Name                  Position   Len  Term Encl Datatype
    FILENAME                            FIRST   120           CHARACTER
      (FILLER FIELD)
    XMLDATA                           DERIVED     *  EOF      CHARACTER
        Dynamic LOBFILE.  Filename in field FILENAME
    Parse Error on row 2 in table T1
    OCI-31061: Message 31061 not found;  product=RDBMS; facility=OCI
    ; arguments: [XML event error]
    OCI-19202: Error occurred in XML processing
    In line 1 of orastream:
    LPX-00225: end-element tag "E2" does not match start-element tag "E1"
    The following index(es) on table T1 were processed:
    index SQLLDR.SYS_C0011395 loaded successfully with 2 keys
    Table T1:
      2 Rows successfully loaded.
      1 Row not loaded due to data errors.
      0 Rows not loaded because all WHEN clauses were failed.
      0 Rows not loaded because all fields were null.
    Bind array size not used in direct path.
    Column array  rows :       1
    Stream buffer bytes:  256000
    Read   buffer bytes: 1048576
    Total logical records skipped:          0
    Total logical records read:             3
    Total logical records rejected:         1
    Total logical records discarded:        0
    Total stream buffers loaded by SQL*Loader main thread:        3
    Total stream buffers loaded by SQL*Loader load thread:        0
    Run began on Mon Nov 29 08:41:37 2010
    Run ended on Mon Nov 29 08:41:37 2010
    Elapsed time was:     00:00:00.53
    CPU time was:         00:00:00.02
    SQL> --
    SQL> host type testcase.bad
    The system cannot find the file specified.
    SQL> --
    SQL> quit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64
    bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
C:\xdb\bugs\sqlldr>
Whereas with a conventional load, all works as expected:
SQL> host sqlldr -userid=&USERNAME/&PASSWORD -control=testcase.ctl -bad=testcase.bad log=sqlldr.log
    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Nov 29 08:43:59 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Commit point reached - logical record count 2
    Commit point reached - logical record count 3
    SQL> --
    SQL> host type sqlldr.log
    SQL*Loader: Release 11.2.0.2.0 - Production on Mon Nov 29 08:43:59 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Control File:   testcase.ctl
    Data File:      testcase.dat
      Bad File:     testcase.bad
      Discard File:  none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional
    Table T1, loaded from every logical record.
    Insert option in effect for this table: APPEND
       Column Name                  Position   Len  Term Encl Datatype
    FILENAME                            FIRST   120           CHARACTER
      (FILLER FIELD)
    XMLDATA                           DERIVED     *  EOF      CHARACTER
        Dynamic LOBFILE.  Filename in field FILENAME
    Record 2: Rejected - Error on table T1.
    ORA-31061: XDB error: XML event error
    ORA-19202: Error occurred in XML processing
    In line 1 of orastream:
    LPX-00225: end-element tag "E2" does not match start-element tag "E1"
    Table T1:
      2 Rows successfully loaded.
      1 Row not loaded due to data errors.
      0 Rows not loaded because all WHEN clauses were failed.
      0 Rows not loaded because all fields were null.
    Space allocated for bind array:                   8320 bytes(64 rows)
    Read   buffer bytes: 1048576
    Total logical records skipped:          0
    Total logical records read:             3
    Total logical records rejected:         1
    Total logical records discarded:        0
    Run began on Mon Nov 29 08:43:59 2010
    Run ended on Mon Nov 29 08:43:59 2010
    Elapsed time was:     00:00:00.24
    CPU time was:         00:00:00.02
    SQL> --
    SQL> host type testcase.bad
    BADFILE1.xml
    SQL> --
    SQL> quit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64
    bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    C:\xdb\bugs\sqlldr>

  • Package load Failed! Invalid package Title, manifest file cannot be found

    Greetings. I am new to UPK and am seeking guidance on an upgrade to 11.1.
    A little background on what we are trying to do:
    We have two machines:
Windows 2003 machine in domain 'X' | UPK 11.0.0.1.
Windows 2008 R2 virtual machine in domain 'Y' | the plan is to install UPK 11.0 on the new machine first, install the 11.0.0.1 patch, and then upgrade to 11.1.
We are using the same database as the old machine. The Developer Server uses Standard Authentication; Knowledge Center uses Windows Authentication.
    ==========================================================================================
    So far we have installed 11.1 with the respective database upgrades, but are lost as to how to migrate the old Content Root data. We have copied all the content to the new Content Root, added it to IIS like the old machine, and verified that the UPK database Content Root path is correct and working.
    For the rest of the post, I am remoted into the server from my local workstation, and then using IE to access KCenter on the server's FQDN, not via localhost.
Since I am green, I presumed that I would need to import the titles under Manager in KCenter. I zipped each content folder individually (they are named "1", "9", etc.), and then attempted to import each zip file.
    NOTE: At this stage sometimes we are prompted to authenticate again. When it does, sometimes the credentials that we know work, get rejected and we are prompted again. This will repeat until we get a 401 and we have to start over. We have verified that the Windows account currently in session with the site is valid and has administrator authority within KC.
    NOTE2: On the first few tries, .NET complained that maxRequestLength was not large enough. I had our servers team increase it on the actual server in order for us to proceed.
Provided we don't have to authenticate again, or that it actually accepts our credentials, we get to the 30% mark and then receive the error: "Package load Failed! Invalid package Title, manifest file cannot be found"
    Does anyone have any wisdom for this process? The title importation section in the deployment manual is not helping, and the administrator from whom I am taking over this software has not had to deal with this before.
    Thank you,
    Ian
    Edited by: 986290 on Feb 5, 2013 10:21 AM

    Hi Marc, thanks for the post,
    I have gone through an extensive troubleshooting process with Oracle, including their development team, in order to fully complete our project setup. As a supplemental bit of information, we also were having connection errors when attempting to publish directly to the knowledge center.
    As far as the context of this post, here is the solution in brief:
1) Verify that the package being imported was published from the same version of the Developer Client as the Knowledge Center installation.
    2) Check for database consistency issues. In our case we were using a database that was built in another domain. After numerous attempts with Oracle support on the line to determine the inconsistency, it was decided to completely reinstall and build a new database from scratch in the new domain.
    3) Check application pool identities and access (while we had some inconsistencies here, correcting them did not change the behavior of the import/publish errors)
    In review, a lot of what we had setup was correct. Our primary point of failure, we feel, was using the old database. Technically this should not have been a problem, but Murphy likes to get his way sometimes.
    Cheers,
    Ian

  • Material Master Data load failed

    Hi All,
I am loading material master data. I have around 8055 records in R/3. The load has failed with the error "Non-updated IDocs found in Business Information Warehouse", asking me to process the IDocs manually. I have checked WE05 for the IDocs and found 3 IDocs with status 64 (IDoc ready to be transferred to application).
But when I checked the manage screen of 0MATERIAL, I could see the Transferred and Updated records as 8055. I have even checked the data in 0MATERIAL and found that all the data (8055 records) has already been uploaded.
Why is it still showing an error (load failed) even when all the data has been uploaded? What should I do now?
    Best Regards,
    Nene.

hi Nene/Sankar,
for the material text "no selection on languages" issue, please check Note 546346 - Material texts: no selection on languages,
and for the IDoc problem, check e.g. Note 561880 - Requests hang because IDocs are not processed, and Note 555229 - IDocs hang in status 64 for tRFC with immediate processing.
    hope this helps.
    Note 546346 - Material texts: no selection on languages
    Summary
    Symptom
    When loading material texts from R/3 into the Business Information Warehouse, you cannot select languages.
    Other terms
    DataSource, 0MATERIAL_TEXT, InfoSource, InfoObject, 0MATERIAL, SPRAS, LANGU, 0LANGU, MAT_BW, extraction, selection field, delta extraction, ALE, delta, change pointer
    Reason and Prerequisites
    As of PI/PI-A 2001_2 Support Package 6, the selection option for the 'SPRAS' field in the OLTP system was undone in the 0MATERIAL_TEXT DataSource. (Refer here to note 505952).
    Solution
    As of PI/PI-A 2002_1 Support Package 4, the 'SPRAS' field is provided for selection again with the 0MATERIAL_TEXT DataSource in the OLTP system. When loading the material texts from BW, the language is still not provided for selection in the scheduler, instead all languages of the language vector in BW are implicitly requested from the source system. However, during the delta update in the source system, the change pointers for all languages in the source system are now set to processed, regardless of whether the language was requested by BW or not.
    Import
    Support Package 4 for PI/PI-A 2002_1_31I - PI/PI-A 2002_1_45B
    Support Package 4 for PI 2002_1_46B - PI 2002_1_46C
    Support Package 3 for PI 2002_1_470
    In transaction RSA5, copy the D version of the 0MATERIAL_TEXT DataSource to the A version.
    Then replicate the DataSources for 0MATERIAL in the BW system. The 'SPRAS' field is then flagged again as a selection field in the transfer structure of the 0MATERIAL InfoSource. The transfer rules remain unchanged. Activate the transfer rules and perform a delta initialization again.
    Note 561880 - Requests hang because IDocs are not processed
    Symptom
    Data extraction in a BW or SEM BW system from an external OLTP System (such as R/3) or an internal (via DataMart) OLTP System hangs with the 'Yellow' status in the load monitor.
    After a timeout, the request status finally switches to 'Red'.
    Information IDocs with the status '64' are displayed in the 'Detail' tab.
    Other terms
    IDoc, tRFC, ALE, status 64
    Reason and Prerequisites
    Status information on load requests is transferred in the form of IDocs.
    IDocs are processed in the BW ALE model using tRFC in online work processes (DIA).
    IDoc types for BW (RSINFO, RSRQST, RSSEND) are processed immediately.
If no free online work process is available, the IDocs remain and must then be restarted to transfer the request information. With the conversion to asynchronous processing, it can often happen that no DIA is available for tRFC for a short period of time (see note 535172).
    The IDoc status 64 can be caused by other factors such as a rollback in the application updating the IDocs. See the relevant notes.
    Furthermore, you can also display these IDocs after the solution mentioned below, however, this is only intended as information.
    You must therefore analyze the status text.
    Solution
    We recommend asynchronous processing for Business Warehouse.
    To do this, you need the corrections from note 535172 as well as note 555229 or the relevant Support Packages.
    The "BATCHJOB" entry in the TEDEF table mentioned in note 555229 is generated automatically in the BW system when you import Support Package 08 for BW 3.0B (Support Package 2 for 3.1 Content).For other releases and Support Package levels, you must manually implement the entry via transaction SE16.
    Depending on the Basis Support Package imported, you may also have to implement the source code corrections from note 555229.
    The following basic recommendations apply in avoiding bottlenecks in the dialog processing and checking of IDocs for BW:
    1. Make sure there is always sufficient DIA, that is, at least 1 DIA more than all other work processes altogether, for example, 8 DIA for a total of 15 work processes (see also note 74141).
   TIP: 2 UPD processes are sufficient in BW; BW does not need any UP2.
2. Unprocessed Info IDocs should be processed manually within the request in BW; in the 'Detail' tab, you can start each IDoc again by selecting 'Update manually' (right mouse button).
    3. Use BD87 to check the system daily (or whenever a problem arises) for IDocs that have not yet been processed and reactivate if necessary.
    However, make sure beforehand that these IDocs can actually be assigned to the current status of requests.
       TIP: Also check transaction SM58 for problematic tRFC entries.
    IMPORTANT: Notes 535172, 555229 and the above recommendations are relevant (unless otherwise specified) both for BW and for SAP source systems.

  • Master data load failed

    Hi Experts,
One of my master data full loads failed; see the below error message in the status tab:
    Incorrect data records - error requests (total status RED)
    Diagnosis
    Data records were recognized as incorrect.
    System response
    The valid records were updated in the data target.
    The request was marked as incorrect so that the data that was already updated cannot be used in reporting.
    The incorrect records were not written in the data targets but were posted retroactively under a new request number in the PSA.
    Procedure
    Check the data in the error requests, correct the errors and post the error requests. Then set this request manually to green.
    can any one please give me solution.
    Thanks
    David

    HI,
I am loading the data from an application server; I don't have R/3 for this load. The below message is showing in the status tab:
    Request still running
    Diagnosis
    No errors could be found. The current process has probably not finished yet.
    System response
    The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
    and/or
    the maximum wait time for this request has not yet run out
    and/or
    the batch job in the source system has not yet ended.
    Current status
    Thanks
    David

  • Solaris boot images fail with bad magic number

hi guys, I have had Solaris on boxes for almost 10 years. I have never installed Sol 10, and I have a problem:
1. I download the zip files, then use WinRAR to unzip them.
2. Burn the image with Nero 7.
When my SunBlade 1000 comes up I do the Stop-A, and from the Ok prompt
I do boot cdrom.
This fails with "bad magic number". I have re-downloaded it and re-burnt it with Nero reinstalled,
and also changed out the DVD on the SunBlade.
3. I found a copy of Solaris 8, and I got back a complaint asking where Solaris 9 is, so most likely the drive is OK.
    HELP
    Cris Harrison

    Hello Cris,
unfortunately I don't understand your last sentence!
I found a copy of Solaris 8, and I got back a complaint asking where Solaris 9 is, so most likely the drive is OK
If this was a Solaris 8 (7/01 or later) DVD that did successfully boot, your DVD drive firmware (assuming that this is the Sun Toshiba SD-M1401) is up-to-date. If the DVD drive has firmware 1007, an update to 1009 is required to boot from DVD; otherwise you won't be able to boot from DVD (boot from CD works, and a DVD can be automounted/mounted).
    Partial output of probe-scsi
    Before update:
    Unit 0 Removable Read Only device TOSHIBA DVD-ROM SD-M14011007
    After update:
    Unit 0 Removable Read Only device TOSHIBA DVD-ROM SD-M14011009
Patch 111649-04 - Toshiba DVD 1401 firmware update: http://sunsolve.sun.com/search/advsearch.do?collection=PATCH&type=collections&queryKey5=111649&toDocument=yes
when my SunBlade 1000 comes up I do the Stop-A, and from the Ok prompt ...
    Instead of trying to directly boot, disable auto-boot and retry after a clean power-on.
    Break with Stop-A
    setenv auto-boot? false
    reset-all
    boot cdrom
    Michael

  • DATE_DIFF(ZPDATE,ZBDATE) Load fails

Please help me resolve this problem; I just want to calculate the difference in number of days between two dates. I will really be thankful to SDN and all of you.
1. I am trying to load the data from ECC, where I have a purchase date field ZPDATE and
a buy date field ZBDATE. When I use the formula
DATE_DIFF(ZPDATE,ZBDATE), the load fails, since the format in ECC is ddmmyyyy and this formula looks for yyyymmdd. I tried changing the DataSource field conversion to SDATE but it is still failing. Can you let me know the solution?
2. I tried writing the correct ABAP code but it did not work. Can you suggest here?
    <b>
    Code for zpdate:</b>
    data: zmonth(2) type c,
    zyear(4) type c.
    data: zday(2) type c.
    zday = SOURCE_FIELDS-/BIC/ZPDATE+4(2).
    zmonth = SOURCE_FIELDS-/BIC/ZPDATE+4(2).
    zyear = SOURCE_FIELDS-/BIC/ZPDATE+0(4).
concatenate zyear zmonth zday INTO RESULT.
    Code for ZBDATE
    Same as above except SOURCE_FIELDS-/BIC/ZBDATE
    Then Formula in Tranformation
    DATE_DIFF(ZPDATE,ZBDATE)
Please help me fix this.
It's failing with the error:
@5C@     An error occurred in a function of the formula     @35@
Please help me with the correct solution.
    Thanks
    Soniya

zday = SOURCE_FIELDS-/BIC/ZPDATE+4(2).
zmonth = SOURCE_FIELDS-/BIC/ZPDATE+4(2).
zyear = SOURCE_FIELDS-/BIC/ZPDATE+0(4).
Instead of zday = SOURCE_FIELDS-/BIC/ZPDATE+4(2),
try zday = SOURCE_FIELDS-/BIC/ZPDATE+6(2).
(You are reading the month's two characters twice: in a YYYYMMDD value, the year is at offset 0, the month at offset 4, and the day at offset 6.)

  • Windows 8.1 SOFTWARE registry hive load failed on Windows Server 2012

    Hello,
I am participating in a custom Windows software backup/restore project that requires loading the Windows SOFTWARE/SYSTEM registry hives from a target OS system drive connected to a Windows system.
On all Windows versions except Windows 8.1 the program works correctly, but when the host system (that the program runs on) is Windows Server 2012 or Windows 8 and the target system is Windows 8.1, the registry hive load fails with the following error:
Failed to load f:\Windows\System32\config\software: [1009] The configuration registry database is corrupt.
After running 'chkdsk /r' the error still remained. All required security privileges (SE_BACKUP, SE_RESTORE) are applied. All systems are 64-bit.
Generally, even the system registry editor (regedit) could not open the SOFTWARE hive from Windows 8.1, with the following error:
Cannot Load f:\Windows\System32\config\software: Error while loading hive.
But when the host system is Windows 7 or Windows Server 2008, the SOFTWARE hive loads without any problem.
So is there some Windows 8/8.1 registry hive validation mechanism or additional security check that prevents loading registry hives from another OS instance?

Sorry for the late response. I was busy with other tasks.
The ProcMon tool shows RegLoadKey failed when it tried to load the hive on Windows 8.1 (on an 8.1-based WinPE also). On Windows 7, I didn't see the error (it shows SUCCESS instead of REGISTRY CORRUPT). Once the hive is loaded & unloaded on a Windows 7 OS, the
checksum of the hive is changed, and I can load the updated hive with regedit on a Windows 8.1 OS.
    "reg.exe","752","RegCloseKey","HKLM\SOFTWARE\Microsoft\SQMClient\Windows","SUCCESS",""
    "reg.exe","752","RegQueryKey","HKLM","SUCCESS","Query: HandleTags, HandleTags: 0x0"
    "reg.exe","752","RegOpenKey","HKLM\Software\Microsoft\Rpc","SUCCESS","Desired Access: Query Value"
    "reg.exe","752","RegQueryValue","HKLM\SOFTWARE\Microsoft\Rpc\IdleTimerWindow","NAME NOT FOUND","Length: 144"
    "reg.exe","752","RegCloseKey","HKLM\SOFTWARE\Microsoft\Rpc","SUCCESS",""
    "reg.exe","752","QueryNameInformationFile","C:\Dhoni","SUCCESS","Name: \Dhoni"
    "reg.exe","752","RegQueryKeySecurity","HKLM","SUCCESS",""
    "reg.exe","752","RegLoadKey","HKLM\target1","REGISTRY CORRUPT","Hive Path: C:\Dhoni\SYSTEM1"

  • Can't open Illustrator file - "does not have the correct number of operands..."

    Right off the bat - Mac, 10.6.8, Illustrator CS4. I've seen different variations of this "...correct number of operands..." error on these boards, with fonts and other things as the culprit, but I haven't come across this specific issue yet. If someone has seen something along these lines please let me know.
I work for a small design agency that uses Illustrator CS4 on a consistent basis. We have had an ongoing issue where random files that had been working fine will sporadically become corrupt, forcing us to use Document Recovery Mode and then edit the file in a text editor to fix the problem, which does work flawlessly for this issue, other than the headache of having to do it repeatedly.
    Our workflow is such that we use a local area storage RAID which everyone uses to access the same client files. These client files are all in a series of nested folders we employ to keep all our separate client's work separate. The error arises when certain images are placed into a document, then the document is saved and closed. Upon trying to reopen the file we get an error message like this:
    Where the string "/CR0050_ret.psd" is the filename of the offending image. Basically what has happened is that in the actual code for the document, there is a string that starts as "%%IncludeFile....." which specifies the file's pathway on the local area storage we use. The last segment of that string is the image's filename, which has gotten returned to the next line down, and now the program cannot interpret that line in the code, so it cannot properly open the file. The file will open but only the content that is above the offending IncludeFile string is visible.
    (client information blurred out)
    This does not happen to all the "%%IncludeFile" strings in the document, and does not happen for all images on our local area storage. We know how to fix the problem through the Document Recovery mode and then editing the text. Once we return the filename portion of the code back up to the proper line, and save the file in the text editor, the file will open in Illustrator and show all the information contained within properly. However, if you save the file out again without making any changes, the same errors will propagate throughout the file's code again, and become unreadable prompting another document recovery. Insert endless cycle here.
    What we would like to try and find out is why is this happening, so we can prevent it from happening in the future. Is the image nested too far down in the hierarchy of folders? Are there too many characters in the file pathway string? What can we do to keep this from happening moving forward?
    I know that an easy answer is to copy the files over locally to our hard drives, and work from there, but this is not a very efficient way to work, as we are all collectively working on different files, utilizing common image resources. We like to keep one set of images so there is no duplication that may lead to the wrong image being sent out - "Oh you sent John's version? You should have sent Molly's version of the image for that file"
    We also downloaded the trial version of Illustrator CS6 to see if an upgrade would work wonders, but it does not, the problem persists when saving a recovered file in CS6.
    Any thoughts? Thanks in advance.
    ABK

    In Illustrator it is generally not a good idea to read and write directly to and from a server: http://helpx.adobe.com/illustrator/kb/illustrator-support-networks-removable-media.html
    If you really think you need to do it, then check your network thoroughly.

  • "error: command failed to execute correctly" on several packages

    Last night, when I updated before shutting down, I got a few errors, as in the subject. As it was very late, I thought I'd pick it up today.
Unfortunately, the pacman log only lists one of the ones that failed, libgpg-error. The other one that I remember erroring was gawk. There were a few others, maybe four or five, but I couldn't reliably recall them all, so I won't guess.
    Here's a new attempt to reinstall gawk with --debug. I did the same with libgpg-error and the error occurred at the same place, with very similar output, so I think the issue is the same for all failures.
    debug: pacman v4.2.1 - libalpm v9.0.1
    debug: parseconfig: options pass
    debug: config: attempting to read file /etc/pacman.conf
    debug: config: finish section '(null)'
    debug: config: new section 'options'
    debug: config: HoldPkg: pacman
    debug: config: HoldPkg: glibc
    debug: config: usedelta (default 0.7)
    debug: config: arch: x86_64
    debug: config: verbosepkglists
    debug: config: chomp
    debug: config: SigLevel: Required
    debug: config: SigLevel: DatabaseOptional
    debug: config: SigLevel: TrustedOnly
    debug: config: LocalFileSigLevel: Optional
    debug: config: finish section 'options'
    debug: config: new section 'core'
    debug: config file /etc/pacman.conf, line 78: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'core'
    debug: config: new section 'extra'
    debug: config file /etc/pacman.conf, line 81: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'extra'
    debug: config: new section 'xyne-x86_64'
    debug: config: finish section 'xyne-x86_64'
    debug: config: new section 'community'
    debug: config file /etc/pacman.conf, line 91: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'community'
    debug: config: new section 'multilib'
    debug: config file /etc/pacman.conf, line 100: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'multilib'
    debug: config: new section 'infinality-bundle'
    debug: config: finish section 'infinality-bundle'
    debug: config: new section 'infinality-bundle-multilib'
    debug: config: finish section 'infinality-bundle-multilib'
    debug: config: new section 'infinality-bundle-fonts'
    debug: config: finish section 'infinality-bundle-fonts'
    debug: config: new section '(null)'
    debug: config: finished parsing /etc/pacman.conf
    debug: setup_libalpm called
    debug: option 'logfile' = /var/log/pacman.log
    debug: option 'gpgdir' = /etc/pacman.d/gnupg/
    debug: option 'cachedir' = /var/cache/pacman/pkg/
    debug: parseconfig: repo pass
    debug: config: attempting to read file /etc/pacman.conf
    debug: config: finish section '(null)'
    debug: config: new section 'options'
    debug: config: finish section 'options'
    debug: config: new section 'core'
    debug: config file /etc/pacman.conf, line 78: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'core'
    debug: registering sync database 'core'
    debug: database path for tree core set to /var/lib/pacman/sync/core.db
    debug: "/var/lib/pacman/sync/core.db.sig" is not readable: No such file or directory
    debug: sig path /var/lib/pacman/sync/core.db.sig could not be opened
    debug: missing optional signature
    debug: setting usage of 15 for core repoistory
    debug: adding new server URL to database 'core': http://arch.tamcore.eu/core/os/x86_64
    debug: adding new server URL to database 'core': http://mirror.one.com/archlinux/core/os/x86_64
    debug: adding new server URL to database 'core': http://mirror.gnomus.de/core/os/x86_64
    debug: adding new server URL to database 'core': http://mirror.js-webcoding.de/pub/archlinux/core/os/x86_64
    debug: adding new server URL to database 'core': http://archlinux.polymorf.fr/core/os/x86_64
    debug: config: new section 'extra'
    debug: config file /etc/pacman.conf, line 81: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'extra'
    debug: registering sync database 'extra'
    debug: database path for tree extra set to /var/lib/pacman/sync/extra.db
    debug: "/var/lib/pacman/sync/extra.db.sig" is not readable: No such file or directory
    debug: sig path /var/lib/pacman/sync/extra.db.sig could not be opened
    debug: missing optional signature
    debug: setting usage of 15 for extra repoistory
    debug: adding new server URL to database 'extra': http://arch.tamcore.eu/extra/os/x86_64
    debug: adding new server URL to database 'extra': http://mirror.one.com/archlinux/extra/os/x86_64
    debug: adding new server URL to database 'extra': http://mirror.gnomus.de/extra/os/x86_64
    debug: adding new server URL to database 'extra': http://mirror.js-webcoding.de/pub/archlinux/extra/os/x86_64
    debug: adding new server URL to database 'extra': http://archlinux.polymorf.fr/extra/os/x86_64
    debug: config: new section 'xyne-x86_64'
    debug: config: SigLevel: Required
    debug: config: finish section 'xyne-x86_64'
    debug: registering sync database 'xyne-x86_64'
    debug: database path for tree xyne-x86_64 set to /var/lib/pacman/sync/xyne-x86_64.db
    debug: GPGME version: 1.5.4
    debug: GPGME engine info: file=/usr/bin/gpg2, home=/etc/pacman.d/gnupg/
    debug: checking signature for /var/lib/pacman/sync/xyne-x86_64.db
    debug: 1 signatures returned
    debug: fingerprint: EC3CBE7F607D11E663149E811D1F0DC78F173680
    debug: summary: valid
    debug: summary: green
    debug: status: Success
    debug: timestamp: 1430676813
    debug: exp_timestamp: 0
    debug: validity: full; reason: Success
    debug: key: EC3CBE7F607D11E663149E811D1F0DC78F173680, Xyne. (key #3) <[email protected]>, owner_trust unknown, disabled 0
    debug: signature is valid
    debug: signature is fully trusted
    debug: setting usage of 15 for xyne-x86_64 repoistory
    debug: adding new server URL to database 'xyne-x86_64': http://xyne.archlinux.ca/repos/xyne
    debug: config: new section 'community'
    debug: config file /etc/pacman.conf, line 91: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'community'
    debug: registering sync database 'community'
    debug: database path for tree community set to /var/lib/pacman/sync/community.db
    debug: "/var/lib/pacman/sync/community.db.sig" is not readable: No such file or directory
    debug: sig path /var/lib/pacman/sync/community.db.sig could not be opened
    debug: missing optional signature
    debug: setting usage of 15 for community repoistory
    debug: adding new server URL to database 'community': http://arch.tamcore.eu/community/os/x86_64
    debug: adding new server URL to database 'community': http://mirror.one.com/archlinux/community/os/x86_64
    debug: adding new server URL to database 'community': http://mirror.gnomus.de/community/os/x86_64
    debug: adding new server URL to database 'community': http://mirror.js-webcoding.de/pub/archlinux/community/os/x86_64
    debug: adding new server URL to database 'community': http://archlinux.polymorf.fr/community/os/x86_64
    debug: config: new section 'multilib'
    debug: config file /etc/pacman.conf, line 100: including /etc/pacman.d/mirrorlist
    debug: config: attempting to read file /etc/pacman.d/mirrorlist
    debug: config: finished parsing /etc/pacman.d/mirrorlist
    debug: config: finish section 'multilib'
    debug: registering sync database 'multilib'
    debug: database path for tree multilib set to /var/lib/pacman/sync/multilib.db
    debug: "/var/lib/pacman/sync/multilib.db.sig" is not readable: No such file or directory
    debug: sig path /var/lib/pacman/sync/multilib.db.sig could not be opened
    debug: missing optional signature
    debug: setting usage of 15 for multilib repository
    debug: adding new server URL to database 'multilib': http://arch.tamcore.eu/multilib/os/x86_64
    debug: adding new server URL to database 'multilib': http://mirror.one.com/archlinux/multilib/os/x86_64
    debug: adding new server URL to database 'multilib': http://mirror.gnomus.de/multilib/os/x86_64
    debug: adding new server URL to database 'multilib': http://mirror.js-webcoding.de/pub/archlinux/multilib/os/x86_64
    debug: adding new server URL to database 'multilib': http://archlinux.polymorf.fr/multilib/os/x86_64
    debug: config: new section 'infinality-bundle'
    debug: config: finish section 'infinality-bundle'
    debug: registering sync database 'infinality-bundle'
    debug: database path for tree infinality-bundle set to /var/lib/pacman/sync/infinality-bundle.db
    debug: checking signature for /var/lib/pacman/sync/infinality-bundle.db
    debug: 1 signatures returned
    debug: fingerprint: A9244FB5E93F11F0E975337FAE6866C7962DDE58
    debug: summary: valid
    debug: summary: green
    debug: status: Success
    debug: timestamp: 1430276639
    debug: exp_timestamp: 0
    debug: validity: full; reason: Success
    debug: key: A9244FB5E93F11F0E975337FAE6866C7962DDE58, bohoomil (dev key) <[email protected]>, owner_trust unknown, disabled 0
    debug: signature is valid
    debug: signature is fully trusted
    debug: setting usage of 15 for infinality-bundle repository
    debug: adding new server URL to database 'infinality-bundle': http://bohoomil.com/repo/x86_64
    debug: config: new section 'infinality-bundle-multilib'
    debug: config: finish section 'infinality-bundle-multilib'
    debug: registering sync database 'infinality-bundle-multilib'
    debug: database path for tree infinality-bundle-multilib set to /var/lib/pacman/sync/infinality-bundle-multilib.db
    debug: checking signature for /var/lib/pacman/sync/infinality-bundle-multilib.db
    debug: 1 signatures returned
    debug: fingerprint: A9244FB5E93F11F0E975337FAE6866C7962DDE58
    debug: summary: valid
    debug: summary: green
    debug: status: Success
    debug: timestamp: 1430087321
    debug: exp_timestamp: 0
    debug: validity: full; reason: Success
    debug: key: A9244FB5E93F11F0E975337FAE6866C7962DDE58, bohoomil (dev key) <[email protected]>, owner_trust unknown, disabled 0
    debug: signature is valid
    debug: signature is fully trusted
    debug: setting usage of 15 for infinality-bundle-multilib repository
    debug: adding new server URL to database 'infinality-bundle-multilib': http://bohoomil.com/repo/multilib/x86_64
    debug: config: new section 'infinality-bundle-fonts'
    debug: config: finish section 'infinality-bundle-fonts'
    debug: registering sync database 'infinality-bundle-fonts'
    debug: database path for tree infinality-bundle-fonts set to /var/lib/pacman/sync/infinality-bundle-fonts.db
    debug: checking signature for /var/lib/pacman/sync/infinality-bundle-fonts.db
    debug: 1 signatures returned
    debug: fingerprint: A9244FB5E93F11F0E975337FAE6866C7962DDE58
    debug: summary: valid
    debug: summary: green
    debug: status: Success
    debug: timestamp: 1430276566
    debug: exp_timestamp: 0
    debug: validity: full; reason: Success
    debug: key: A9244FB5E93F11F0E975337FAE6866C7962DDE58, bohoomil (dev key) <[email protected]>, owner_trust unknown, disabled 0
    debug: signature is valid
    debug: signature is fully trusted
    debug: setting usage of 15 for infinality-bundle-fonts repository
    debug: adding new server URL to database 'infinality-bundle-fonts': http://bohoomil.com/repo/fonts
    debug: config: new section '(null)'
    debug: config: finished parsing /etc/pacman.conf
    debug: loading package cache for repository 'core'
    debug: opening archive /var/lib/pacman/sync/core.db
    debug: added 208 packages to package cache for db 'core'
    debug: adding package 'gawk'
    debug: loading package cache for repository 'local'
    debug: added 1122 packages to package cache for db 'local'
    warning: gawk-4.1.2-1 is up to date -- reinstalling
    debug: adding package gawk-4.1.2-1 to the transaction add list
    resolving dependencies...
    debug: resolving target's dependencies
    debug: started resolving dependencies
    debug: checkdeps: package gawk-4.1.2-1
    debug: finished resolving dependencies
    looking for conflicting packages...
    debug: looking for conflicts
    debug: check targets vs targets
    debug: check targets vs targets
    debug: check targets vs db and db vs targets
    debug: check targets vs db
    debug: check db vs targets
    debug: checking dependencies
    debug: checkdeps: package gawk-4.1.2-1
    debug: found cached pkg: /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: setting download size 0 for pkg gawk
    debug: sorting by dependencies
    debug: started sorting dependencies
    debug: sorting dependencies finished
    Package (1)   Old Version   New Version   Net Change

    core/gawk     4.1.2-1       4.1.2-1         0.00 MiB

    Total Installed Size:  2.19 MiB
    Net Upgrade Size:      0.00 MiB
    :: Proceed with installation? [Y/n] y
    debug: using cachedir: /var/cache/pacman/pkg/
    debug: using cachedir: /var/cache/pacman/pkg/
    checking keyring...
    debug: looking up key 771DF6627EDF681F locally
    debug: key lookup success, key exists
    checking package integrity...
    debug: found cached pkg: /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: sig data: iQEcBAABCAAGBQJVQNc+AAoJEHcd9mJ+32gfQZgH/jkRiirmPTb4nE0xgcFGKc8wrxw3k9ooGyMFoeqAthTICB/5dBzNfEQ8b4X74gi8KiYQVYm4WE8kWIidUj5ekJhGwngO6Gk+lwyBq+Uh8rUHDJKw557fImM2bBah2lxNUxqZzxYTA1FByq2lptLB5EPJgAPemyUXACMXITDfqtWMpuHIEPLZi5WW9+cB0eMKz5IeEEfZi4lO2fyfRqxNkRDNSmC5NEDkfhm+XVXBEd4gugSOmYpKzlA67mjw2HP+oOyNheL8st4SjgFr/qVDdbfiBbaTTujC4mF1n73z5qp4K5/xgHqk42ftoo003XFQYVOAg3bDWMvUF5d63D4+HKg=
    debug: checking signature for /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: 1 signatures returned
    debug: fingerprint: 5B7E3FB71B7F10329A1C03AB771DF6627EDF681F
    debug: summary: valid
    debug: summary: green
    debug: status: Success
    debug: timestamp: 1430312766
    debug: exp_timestamp: 0
    debug: validity: full; reason: Success
    debug: key: 5B7E3FB71B7F10329A1C03AB771DF6627EDF681F, Tobias Powalowski <[email protected]>, owner_trust unknown, disabled 0
    debug: signature is valid
    debug: signature is fully trusted
    loading package files...
    debug: found cached pkg: /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: replacing pkgcache entry with package file for target gawk
    debug: opening archive /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: starting package load for /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: found mtree for package /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz, getting file list
    debug: finished mtree reading for /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: sorting package filelist for /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    checking for file conflicts...
    debug: looking for file conflicts
    debug: searching for file conflicts: gawk
    debug: searching for filesystem conflicts: gawk
    checking available disk space...
    debug: checking available disk space
    debug: discovered mountpoint: /tmp
    debug: discovered mountpoint: /sys/kernel/security
    debug: discovered mountpoint: /sys/kernel/debug
    debug: discovered mountpoint: /sys/kernel/config
    debug: discovered mountpoint: /sys/fs/pstore
    debug: discovered mountpoint: /sys/fs/cgroup/systemd
    debug: discovered mountpoint: /sys/fs/cgroup/net_cls
    debug: discovered mountpoint: /sys/fs/cgroup/memory
    debug: discovered mountpoint: /sys/fs/cgroup/freezer
    debug: discovered mountpoint: /sys/fs/cgroup/devices
    debug: discovered mountpoint: /sys/fs/cgroup/cpuset
    debug: discovered mountpoint: /sys/fs/cgroup/cpu,cpuacct
    debug: discovered mountpoint: /sys/fs/cgroup/blkio
    debug: discovered mountpoint: /sys/fs/cgroup
    debug: discovered mountpoint: /sys
    debug: discovered mountpoint: /run/user/1000
    debug: discovered mountpoint: /run
    debug: discovered mountpoint: /proc/sys/fs/binfmt_misc
    debug: discovered mountpoint: /proc
    debug: discovered mountpoint: /home/skanky/personal
    debug: discovered mountpoint: /home
    debug: discovered mountpoint: /dev/shm
    debug: discovered mountpoint: /dev/pts
    debug: discovered mountpoint: /dev/mqueue
    debug: discovered mountpoint: /dev/hugepages
    debug: discovered mountpoint: /dev
    debug: discovered mountpoint: /
    debug: loading fsinfo for /
    debug: partition /, needed 0, cushion 5121, free 1174711
    debug: installing packages
    reinstalling gawk...
    debug: reinstalling package gawk-4.1.2-1
    debug: opening archive /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: extracting: .INSTALL
    debug: removing old package first (gawk-4.1.2-1)
    debug: removing 110 files
    debug: unlinking /usr/share/man/man3/time.3am.gz
    debug: unlinking /usr/share/man/man3/rwarray.3am.gz
    debug: unlinking /usr/share/man/man3/revtwoway.3am.gz
    debug: unlinking /usr/share/man/man3/revoutput.3am.gz
    debug: unlinking /usr/share/man/man3/readfile.3am.gz
    debug: unlinking /usr/share/man/man3/readdir.3am.gz
    debug: unlinking /usr/share/man/man3/ordchr.3am.gz
    debug: unlinking /usr/share/man/man3/inplace.3am.gz
    debug: unlinking /usr/share/man/man3/fork.3am.gz
    debug: unlinking /usr/share/man/man3/fnmatch.3am.gz
    debug: unlinking /usr/share/man/man3/filefuncs.3am.gz
    debug: keeping directory /usr/share/man/man3/ (contains files)
    debug: unlinking /usr/share/man/man1/igawk.1.gz
    debug: unlinking /usr/share/man/man1/gawk.1.gz
    debug: keeping directory /usr/share/man/man1/ (contains files)
    debug: keeping directory /usr/share/man/ (contains files)
    debug: unlinking /usr/share/locale/vi/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/vi/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/vi/ (contains files)
    debug: unlinking /usr/share/locale/sv/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/sv/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/sv/ (contains files)
    debug: unlinking /usr/share/locale/pl/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/pl/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/pl/ (contains files)
    debug: unlinking /usr/share/locale/nl/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/nl/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/nl/ (contains files)
    debug: unlinking /usr/share/locale/ms/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/ms/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/ms/ (contains files)
    debug: unlinking /usr/share/locale/ja/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/ja/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/ja/ (contains files)
    debug: unlinking /usr/share/locale/it/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/it/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/it/ (contains files)
    debug: unlinking /usr/share/locale/fr/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/fr/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/fr/ (contains files)
    debug: unlinking /usr/share/locale/fi/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/fi/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/fi/ (contains files)
    debug: unlinking /usr/share/locale/es/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/es/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/es/ (contains files)
    debug: unlinking /usr/share/locale/de/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/de/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/de/ (contains files)
    debug: unlinking /usr/share/locale/da/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/da/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/da/ (contains files)
    debug: unlinking /usr/share/locale/ca/LC_MESSAGES/gawk.mo
    debug: keeping directory /usr/share/locale/ca/LC_MESSAGES/ (contains files)
    debug: keeping directory /usr/share/locale/ca/ (contains files)
    debug: keeping directory /usr/share/locale/ (contains files)
    debug: unlinking /usr/share/info/gawkinet.info.gz
    debug: unlinking /usr/share/info/gawk.info.gz
    debug: keeping directory /usr/share/info/ (contains files)
    debug: unlinking /usr/share/awk/zerofile.awk
    debug: unlinking /usr/share/awk/walkarray.awk
    debug: unlinking /usr/share/awk/strtonum.awk
    debug: unlinking /usr/share/awk/shellquote.awk
    debug: unlinking /usr/share/awk/round.awk
    debug: unlinking /usr/share/awk/rewind.awk
    debug: unlinking /usr/share/awk/readfile.awk
    debug: unlinking /usr/share/awk/readable.awk
    debug: unlinking /usr/share/awk/quicksort.awk
    debug: unlinking /usr/share/awk/processarray.awk
    debug: unlinking /usr/share/awk/passwd.awk
    debug: unlinking /usr/share/awk/ord.awk
    debug: unlinking /usr/share/awk/noassign.awk
    debug: unlinking /usr/share/awk/libintl.awk
    debug: unlinking /usr/share/awk/join.awk
    debug: unlinking /usr/share/awk/inplace.awk
    debug: unlinking /usr/share/awk/group.awk
    debug: unlinking /usr/share/awk/gettime.awk
    debug: unlinking /usr/share/awk/getopt.awk
    debug: unlinking /usr/share/awk/ftrans.awk
    debug: unlinking /usr/share/awk/ctime.awk
    debug: unlinking /usr/share/awk/cliff_rand.awk
    debug: unlinking /usr/share/awk/bits2str.awk
    debug: unlinking /usr/share/awk/assert.awk
    debug: keeping directory /usr/share/awk/ (in new package)
    debug: keeping directory /usr/share/ (contains files)
    debug: unlinking /usr/lib/gawk/time.so
    debug: unlinking /usr/lib/gawk/testext.so
    debug: unlinking /usr/lib/gawk/rwarray.so
    debug: unlinking /usr/lib/gawk/revtwoway.so
    debug: unlinking /usr/lib/gawk/revoutput.so
    debug: unlinking /usr/lib/gawk/readfile.so
    debug: unlinking /usr/lib/gawk/readdir.so
    debug: unlinking /usr/lib/gawk/ordchr.so
    debug: unlinking /usr/lib/gawk/inplace.so
    debug: unlinking /usr/lib/gawk/fork.so
    debug: unlinking /usr/lib/gawk/fnmatch.so
    debug: unlinking /usr/lib/gawk/filefuncs.so
    debug: keeping directory /usr/lib/gawk/ (in new package)
    debug: unlinking /usr/lib/awk/pwcat
    debug: unlinking /usr/lib/awk/grcat
    debug: keeping directory /usr/lib/awk/ (in new package)
    debug: keeping directory /usr/lib/ (contains files)
    debug: unlinking /usr/include/gawkapi.h
    debug: keeping directory /usr/include/ (contains files)
    debug: unlinking /usr/bin/igawk
    debug: unlinking /usr/bin/gawk-4.1.2
    debug: unlinking /usr/bin/gawk
    debug: unlinking /usr/bin/awk
    debug: keeping directory /usr/bin/ (contains files)
    debug: keeping directory /usr/ (contains files)
    debug: removing database entry 'gawk'
    debug: removing entry 'gawk' from 'local' cache
    debug: extracting files
    debug: opening archive /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz
    debug: skipping extraction of '.PKGINFO'
    debug: extracting /var/lib/pacman/local/gawk-4.1.2-1/install
    debug: extracting /var/lib/pacman/local/gawk-4.1.2-1/mtree
    debug: extract: skipping dir extraction of /usr/
    debug: extract: skipping dir extraction of /usr/lib/
    debug: extract: skipping dir extraction of /usr/share/
    debug: extract: skipping dir extraction of /usr/include/
    debug: extract: skipping dir extraction of /usr/bin/
    debug: extracting /usr/bin/igawk
    debug: extracting /usr/bin/awk
    debug: extracting /usr/bin/gawk-4.1.2
    debug: extracting /usr/bin/gawk
    debug: extracting /usr/include/gawkapi.h
    debug: extract: skipping dir extraction of /usr/share/locale/
    debug: extract: skipping dir extraction of /usr/share/awk/
    debug: extract: skipping dir extraction of /usr/share/info/
    debug: extract: skipping dir extraction of /usr/share/man/
    debug: extract: skipping dir extraction of /usr/share/man/man3/
    debug: extract: skipping dir extraction of /usr/share/man/man1/
    debug: extracting /usr/share/man/man1/gawk.1.gz
    debug: extracting /usr/share/man/man1/igawk.1.gz
    debug: extracting /usr/share/man/man3/filefuncs.3am.gz
    debug: extracting /usr/share/man/man3/fnmatch.3am.gz
    debug: extracting /usr/share/man/man3/fork.3am.gz
    debug: extracting /usr/share/man/man3/inplace.3am.gz
    debug: extracting /usr/share/man/man3/ordchr.3am.gz
    debug: extracting /usr/share/man/man3/readdir.3am.gz
    debug: extracting /usr/share/man/man3/readfile.3am.gz
    debug: extracting /usr/share/man/man3/revoutput.3am.gz
    debug: extracting /usr/share/man/man3/revtwoway.3am.gz
    debug: extracting /usr/share/man/man3/rwarray.3am.gz
    debug: extracting /usr/share/man/man3/time.3am.gz
    debug: extracting /usr/share/info/gawk.info.gz
    debug: extracting /usr/share/info/gawkinet.info.gz
    debug: extracting /usr/share/awk/zerofile.awk
    debug: extracting /usr/share/awk/walkarray.awk
    debug: extracting /usr/share/awk/strtonum.awk
    debug: extracting /usr/share/awk/shellquote.awk
    debug: extracting /usr/share/awk/round.awk
    debug: extracting /usr/share/awk/rewind.awk
    debug: extracting /usr/share/awk/readfile.awk
    debug: extracting /usr/share/awk/readable.awk
    debug: extracting /usr/share/awk/quicksort.awk
    debug: extracting /usr/share/awk/processarray.awk
    debug: extracting /usr/share/awk/ord.awk
    debug: extracting /usr/share/awk/noassign.awk
    debug: extracting /usr/share/awk/libintl.awk
    debug: extracting /usr/share/awk/join.awk
    debug: extracting /usr/share/awk/inplace.awk
    debug: extracting /usr/share/awk/gettime.awk
    debug: extracting /usr/share/awk/getopt.awk
    debug: extracting /usr/share/awk/ftrans.awk
    debug: extracting /usr/share/awk/ctime.awk
    debug: extracting /usr/share/awk/cliff_rand.awk
    debug: extracting /usr/share/awk/bits2str.awk
    debug: extracting /usr/share/awk/assert.awk
    debug: extracting /usr/share/awk/group.awk
    debug: extracting /usr/share/awk/passwd.awk
    debug: extract: skipping dir extraction of /usr/share/locale/vi/
    debug: extract: skipping dir extraction of /usr/share/locale/sv/
    debug: extract: skipping dir extraction of /usr/share/locale/pl/
    debug: extract: skipping dir extraction of /usr/share/locale/nl/
    debug: extract: skipping dir extraction of /usr/share/locale/ms/
    debug: extract: skipping dir extraction of /usr/share/locale/ja/
    debug: extract: skipping dir extraction of /usr/share/locale/it/
    debug: extract: skipping dir extraction of /usr/share/locale/fr/
    debug: extract: skipping dir extraction of /usr/share/locale/fi/
    debug: extract: skipping dir extraction of /usr/share/locale/es/
    debug: extract: skipping dir extraction of /usr/share/locale/de/
    debug: extract: skipping dir extraction of /usr/share/locale/da/
    debug: extract: skipping dir extraction of /usr/share/locale/ca/
    debug: extract: skipping dir extraction of /usr/share/locale/ca/LC_MESSAGES/
    debug: extracting /usr/share/locale/ca/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/da/LC_MESSAGES/
    debug: extracting /usr/share/locale/da/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/de/LC_MESSAGES/
    debug: extracting /usr/share/locale/de/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/es/LC_MESSAGES/
    debug: extracting /usr/share/locale/es/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/fi/LC_MESSAGES/
    debug: extracting /usr/share/locale/fi/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/fr/LC_MESSAGES/
    debug: extracting /usr/share/locale/fr/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/it/LC_MESSAGES/
    debug: extracting /usr/share/locale/it/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/ja/LC_MESSAGES/
    debug: extracting /usr/share/locale/ja/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/ms/LC_MESSAGES/
    debug: extracting /usr/share/locale/ms/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/nl/LC_MESSAGES/
    debug: extracting /usr/share/locale/nl/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/pl/LC_MESSAGES/
    debug: extracting /usr/share/locale/pl/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/sv/LC_MESSAGES/
    debug: extracting /usr/share/locale/sv/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/share/locale/vi/LC_MESSAGES/
    debug: extracting /usr/share/locale/vi/LC_MESSAGES/gawk.mo
    debug: extract: skipping dir extraction of /usr/lib/gawk/
    debug: extract: skipping dir extraction of /usr/lib/awk/
    debug: extracting /usr/lib/awk/pwcat
    debug: extracting /usr/lib/awk/grcat
    debug: extracting /usr/lib/gawk/filefuncs.so
    debug: extracting /usr/lib/gawk/fnmatch.so
    debug: extracting /usr/lib/gawk/fork.so
    debug: extracting /usr/lib/gawk/inplace.so
    debug: extracting /usr/lib/gawk/ordchr.so
    debug: extracting /usr/lib/gawk/readdir.so
    debug: extracting /usr/lib/gawk/readfile.so
    debug: extracting /usr/lib/gawk/revoutput.so
    debug: extracting /usr/lib/gawk/revtwoway.so
    debug: extracting /usr/lib/gawk/rwarray.so
    debug: extracting /usr/lib/gawk/testext.so
    debug: extracting /usr/lib/gawk/time.so
    debug: updating database
    debug: adding database entry 'gawk'
    debug: writing gawk-4.1.2-1 DESC information back to db
    debug: writing gawk-4.1.2-1 FILES information back to db
    debug: adding entry 'gawk' in 'local' cache
    debug: executing ". /tmp/alpm_r21DA5/.INSTALL; post_upgrade 4.1.2-1 4.1.2-1"
    debug: executing "/usr/bin/bash" under chroot "/"
    debug: call to waitpid succeeded
    error: command failed to execute correctly
    debug: running ldconfig
    debug: executing "/usr/bin/ldconfig" under chroot "/"
    debug: call to waitpid succeeded
    debug: unregistering database 'local'
    debug: freeing package cache for repository 'local'
    debug: unregistering database 'core'
    debug: freeing package cache for repository 'core'
    debug: unregistering database 'extra'
    debug: unregistering database 'xyne-x86_64'
    debug: unregistering database 'community'
    debug: unregistering database 'multilib'
    debug: unregistering database 'infinality-bundle'
    debug: unregistering database 'infinality-bundle-multilib'
    debug: unregistering database 'infinality-bundle-fonts'
    Despite the error, pacman itself treats the upgrade/reinstall as successful, in that the latest version shows as installed.
    I searched the forums, and the only other issue that looked related was out-of-date microcode; however, I followed the microcode update instructions some time back and, as far as I can tell, the microcode is up to date.
    I have two main questions:
    1) How do I work out what's causing the error shown above? (One way of reproducing it by hand is sketched below.)
    2) Is there a way to work out which packages gave the error, so I can make sure they were installed properly?
    Thanks.
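    For (1), one approach is to re-run by hand exactly what pacman ran: the debug log above shows it sourcing the package's .INSTALL scriptlet and calling post_upgrade under chroot "/". A minimal sketch, assuming the cached package path taken from the log and bsdtar (shipped with libarchive, a pacman dependency); the /tmp/gawk.INSTALL name is just an illustrative scratch file:

    # pull the install scriptlet out of the cached package
    bsdtar -xOf /var/cache/pacman/pkg/gawk-4.1.2-1-x86_64.pkg.tar.xz .INSTALL > /tmp/gawk.INSTALL
    # repeat the call from the debug log; -x traces each command, so the first
    # one to fail stands out (note: this genuinely executes post_upgrade)
    bash -x -c '. /tmp/gawk.INSTALL; post_upgrade 4.1.2-1 4.1.2-1'
    echo "exit status: $?"

    A non-zero exit status here reproduces pacman's "command failed to execute correctly", and the -x trace shows which command inside the scriptlet returned it.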

    The following packages also hit the same error during an upgrade run:
    ( 2/17) upgrading glibc
    error: command failed to execute correctly
    ( 3/17) upgrading binutils
    error: command failed to execute correctly
    ( 4/17) upgrading coreutils
    error: command failed to execute correctly
    ( 8/17) upgrading gcc
    error: command failed to execute correctly
    ( 9/17) upgrading gcc-fortran
    error: command failed to execute correctly
    (10/17) upgrading gcc-libs
    error: command failed to execute correctly
    Does anybody have a clue?
    Thanks,
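    For (2), two hedged sketches. The first assumes these errors were mirrored into /var/log/pacman.log (they normally are); pairing each error line with the line before it names the package. The second uses pacman's own file check to confirm that the affected packages' files made it to disk intact:

    # which packages logged scriptlet failures
    grep -B1 'command failed to execute correctly' /var/log/pacman.log
    # verify installed files against each package's recorded metadata
    pacman -Qkk gawk glibc binutils coreutils gcc gcc-fortran gcc-libs

    Since "command failed to execute correctly" refers to the post-install scriptlet rather than to file extraction, packages that pass the -Qkk check are installed correctly even though their scriptlets errored.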
