Data activation in ODS taking too long

Hi gurus,
I have loaded data into the PSA and from there into an ODS. However, the data is not activated automatically, so I have to trigger the activation manually.
The problem is that I have to wait a very long time: I started the activation about an hour ago, and the request contains 1.5 million records.
In SM37 the log shows the following messages:
05.02.2007 12:13:37 Job started                                                                       00           516          S    
05.02.2007 12:13:37 Step 001 started (program RSODSACT1, variant &0000000000061, user ID XXXXX)      00           550          S    
05.02.2007 12:13:49 Activation is running: Data target ZFIGL_O2, from 1,553 to 1,553                  RSM          744          S    
05.02.2007 12:13:49 Data to be activated successfully checked against archiving objects              RSMPC         153          S    
05.02.2007 12:13:49 SQL: 05.02.2007 12:13:49 S12940                                                  DBMAN         099          I    
05.02.2007 12:13:49 ANALYZE TABLE "/BIC/AZFIGL_O240" DELETE                                          DBMAN         099          I    
05.02.2007 12:13:49 STATISTICS                                                                       DBMAN         099          I    
05.02.2007 12:13:49 SQL-END: 05.02.2007 12:13:49 00:00:00                                            DBMAN         099          I    
05.02.2007 12:13:49 SQL: 05.02.2007 12:13:49 S12940                                                  DBMAN         099          I    
05.02.2007 12:13:49 BEGIN DBMS_STATS.GATHER_TABLE_STATS ( OWNNAME =>                                 DBMAN         099          I    
05.02.2007 12:13:49 'SAPBWP', TABNAME => '"/BIC/AZFIGL_O240"',                                       DBMAN         099          I    
05.02.2007 12:13:49 ESTIMATE_PERCENT => 1 , METHOD_OPT => 'FOR ALL                                   DBMAN         099          I    
05.02.2007 12:13:49 INDEXED COLUMNS SIZE 75', DEGREE => 1 ,                                          DBMAN         099          I    
05.02.2007 12:13:49 GRANULARITY => 'ALL', CASCADE => TRUE ); END;                                    DBMAN         099          I    
05.02.2007 12:13:56 SQL-END: 05.02.2007 12:13:56 00:00:07                                            DBMAN         099          I    
It has been stuck after that SQL-END line for some time now.
Is this normal? How can I improve the performance of my data loading and activation?
Thank you very much.

Hi Ken.
Many thanks for your input. I think I will follow what has been suggested in the note, quoted below:
4. Activation of data in an ODS object
To improve system performance when activating data in the ODS object, you can make the following entries in Customizing under Business Information Warehouse → General BW Settings → ODS Object Settings:
• the maximum number of parallel processes when activating data in the ODS object, as well as when moving SIDs
• the minimum number of data records for each data package when activating data in the ODS object, meaning you define the size of the data packages that are activated
• the maximum wait time in seconds when activating data in the ODS object. This is the time the main process (batch process) waits for a dialog process it has split off before classifying it as having failed.
However, can someone advise me what would be an optimum / normal value for:
1. the maximum number of parallel processes
2. the minimum number of data records for each data package
3. the maximum wait time in seconds?
Many thanks.

Similar Messages

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts that move data for a date range to a different table. Each script has two parts: first copy the data from the original table to the archive table, then delete the copied rows from the original table. The copy part executes very fast, but the deletion takes around 2-3 hours. The customer analysed the delete query and says it is not using an index and is doing a full table scan, even though the predicate is on the primary key. Please help... more info below.
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to the table schema3.OTW, and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' here is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them; see the sketch after this paragraph.
    The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting once a month, delete once a week or even every night. The number of rows deleted each time will then be much smaller and, if the stats are kept current, Oracle may decide to use the index.
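    A minimal Oracle sketch of both suggestions. The partitioned table name, the choice of DAT_CREATION as partition key, and the dates are illustrative assumptions, not taken from the original schema; interval partitioning as shown needs Oracle 11g or later.
    -- Assumption: DAT_CREATION drives archiving. With range/interval partitioning,
    -- removing a month of data becomes a metadata operation instead of a big DELETE.
    CREATE TABLE "APP"."MON_TXNS_PART" (
        "ID_TXN"       NUMBER(12,0) NOT NULL,
        "DAT_CREATION" DATE NOT NULL
        -- ... remaining columns and constraints as in MON_TXNS ...
    )
    PARTITION BY RANGE ("DAT_CREATION")
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION "P_INITIAL" VALUES LESS THAN (DATE '2013-01-01'));

    -- Drop the partition covering a given date; UPDATE INDEXES keeps global indexes usable.
    ALTER TABLE "APP"."MON_TXNS_PART"
      DROP PARTITION FOR (DATE '2013-01-15') UPDATE INDEXES;

    -- Without partitioning: delete smaller batches more often and keep statistics
    -- current so the optimizer can favor the primary-key index over a full scan.
    DELETE FROM "APP"."MON_TXNS" WHERE "DAT_CREATION" < SYSDATE - 30;
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'MON_TXNS', cascade => TRUE);
    END;
    /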

  • Activation of ODS taking a very long time

    Hello friends,
    I have a problem with activating ODS data. Data has been loaded into the ODS, but the activation process has been running for more than a week now; the problem started last week. We are using BW 3.5 with SP level 18.
    When I right-click the ODS and choose Delete Data, I get the message 'Lock in the lock manager could not be set', message no. RSENQ009.
    Could you please help me with this.
    Any advice greatly appreciated.
    Thanks

    Hi Manoj,
    1) Data activation can be processed in parallel and distributed to server groups in BW Customizing (transaction RSCUSTA2) in BW 3.x. There are parameters for the maximum number of parallel activation dialog processes, the minimum number of records per package, the maximum wait time in seconds for ODS activation (if the scheduled parallel activation processes do not respond within this time, the overall status is set to red), and a server group for the RFC calls when activating ODS data.
    2) Do not set the "BEx Reporting" flag if you do not plan to report on ODS objects in BEx or on the Web. If your reporting requirements on ODS objects are very restricted (e.g., displaying only a few selective records), use InfoSets on top of the ODS objects and disable the "BEx Reporting" flag.
    3) In BW 3.x the SIDs are determined during activation (in BW 2.x during data load), which increases the activation time.
    Also refer to the note linked below:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/oss_notes/sdn_oss_bw_whm/~form/handler
    Hope this helps
    Regards
    Karthik
    Assign points if helpful

  • Getting data from table BSEG taking too long ... any solutions.

    Hello people, I am currently trying to get data from table BSEG for one particular G/L account number, with restrictions, using FOR ALL ENTRIES.
    The problem is that even with such tight restrictions the report runs way too slow. I added an option where you don't have to access table BSEG at all, and with that it runs just fine (all of this is on the PRD server).
    My questions are:
    1.) How come BSEG makes the report slow even though I put tight restrictions on it? I am using FOR ALL ENTRIES where ZUONR EQ i_tab-zuonr (it seems to work fine in DEV) and HKONT EQ '0020103101' (customer deposits).
    2.) Is there a way to do the same thing as in #1, only much faster?
    Thanks guys and take care

    Hi
    It is better not to read the BSEG table unless you have the keys BUKRS and BELNR, because the read can take a long time if there are many hits.
    If you want to find the records of a G/L account, it is better to read the index tables BSIS (for open items) and BSAS (for cleared items); there the field HKONT is part of the key (and ZUONR too). So you can improve the performance:
    DATA: T_ITEMS LIKE STANDARD TABLE OF BSIS.
    SELECT * FROM BSAS INTO TABLE T_ITEMS
      FOR ALL ENTRIES IN I_ITAB WHERE BUKRS = <BUKRS>
                                  AND HKONT = '0020103101'
                                  AND ZUONR = I_ITAB-ZUONR.
    SELECT * FROM BSIS APPENDING TABLE T_ITEMS
      FOR ALL ENTRIES IN I_ITAB WHERE BUKRS = <BUKRS>
                                  AND HKONT = '0020103101'
                                  AND ZUONR = I_ITAB-ZUONR.
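    " Note: if I_ITAB is empty, FOR ALL ENTRIES selects EVERY row of BSAS/BSIS,
    " so check that I_ITAB is not initial before running these SELECTs.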
    Remember that every kind of item has its own index tables:
    - BSIS/BSAS for G/L Account
    - BSIK/BSAK for Vendor
    - BSID/BSAD for Customer
    These tables carry the same information you can find in BSEG and BKPF.
    Max

  • Select Data from BSIS table taking too long

    Hi
    I have to develop a report giving the details of Extended Withholding Tax (EWT) for a list of expense G/Ls.
    Each expense G/L is linked to another G/L, the EWT tax G/L account. This mapping is maintained in a Z-table.
    I have written the following code, but it takes a lot of time to extract the data.
    This gives me the G/Ls I require:
    SELECT * FROM ZSECCO_GL_EWT INTO CORRESPONDING FIELDS OF TABLE IT_GL
     WHERE BUKRS    = P_BUKRS
     AND   WT_QSCOD = P_QSCOD.
    Then I select only the distinct document numbers from the BSIS table for the HKONTs in the above internal table:
    SELECT DISTINCT BUKRS GJAHR BELNR  FROM BSIS INTO CORRESPONDING FIELDS OF
    TABLE IT_BSIS_GL
     FOR ALL ENTRIES IN IT_GL
     WHERE BUKRS = P_BUKRS AND  HKONT = IT_GL-HKONT  AND GJAHR = P_GJAHR.
    Here I once again select the document details, based on the document numbers from the above internal table. This query takes a lot of time:
    SELECT * FROM BSIS INTO CORRESPONDING FIELDS OF TABLE IT_BSIS
     FOR ALL ENTRIES IN IT_BSIS_GL
     WHERE  BUKRS = P_BUKRS AND GJAHR = P_GJAHR
        AND BELNR = IT_BSIS_GL-BELNR.
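    " Note: the primary index of BSIS leads with BUKRS and HKONT, so a selection by
    " BUKRS/GJAHR/BELNR alone cannot use it efficiently; also ensure IT_BSIS_GL is
    " not empty, or FOR ALL ENTRIES will select every row.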
    Please Help

    Hi,
    Check note 992803; it could be that there is an insufficient or missing index for the BSIS table. A quick way to see which indexes BSIS currently has is sketched below.
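    A minimal sketch, assuming your database is Oracle and you can query the DBA views; in an SAP system any new index should still be created via SE11, not by direct DDL:
    -- List the columns of every index on BSIS, in position order.
    SELECT index_name, column_name, column_position
      FROM dba_ind_columns
     WHERE table_name = 'BSIS'
     ORDER BY index_name, column_position;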
    Regards,
    Eli

  • I was backing up my iPhone by changing the library location because I don't have enough space; it was taking too long to copy files, so I cancelled it; the data is stored in the desired location, and now I can't delete that backup

    I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy files, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup.
    Also, how is the performance of the iPhone 4 with iOS 7.1.1?
    T0X1C

    rabidrabbit wrote:
    Can I back up my iPhone 4S to my ipad 3 (64 gb)?
    no
    rabidrabbit wrote:
    However, now I don't have enough space in iCloud to backup either device. Why not?
    iCloud only gives you so much free storage; if you exceed the 5 GB limit you have to pay for additional storage.

  • 0MAT_PLANT activation taking too long

    Hi
    I added a few attributes to 0MAT_PLANT. They are all display attributes except one (0MATERIAL). Activating 0MAT_PLANT takes a long time, and I can see that the X table is being filled (SID creation). What happens after the X table has been filled, and how can I tell how many records will be inserted into the X table? I don't expect it to be the same as the number of records I see in ECC with RSA3.

    Hello Jimi,
    Adding attributes to a master-data InfoObject is an expensive operation in terms of DB resources and can take some time, depending on the number of records in the master-data table. If you check the number of records in the P table for 0MAT_PLANT (you can check this in the definition of the table in transaction RSD1), the number of records added to the X table should be the same. If a Q table exists (i.e., you have both time-independent and time-dependent attributes), then a Y table will also need to be filled, and it should have the same number of records as the Q table. A quick way to compare the counts on the database is sketched below.
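    If your BW runs on Oracle, a minimal sketch of that comparison; the /BI0/P and /BI0/X table names follow the standard naming convention for 0MAT_PLANT, and the SAPBWP schema is an assumption (adjust both to your system):
    -- P table (attributes) vs. X table (SIDs): the row counts should match.
    SELECT (SELECT COUNT(*) FROM "SAPBWP"."/BI0/PMAT_PLANT") AS p_rows,
           (SELECT COUNT(*) FROM "SAPBWP"."/BI0/XMAT_PLANT") AS x_rows
      FROM dual;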
    The following might help to improve the activation performance:
    Before adding the attributes, check that the master-data object is consistent using the report described in SAP note 1030224.
    Also check that you have configured the parameter mentioned in note 536223 (it is release-independent).
    Best Regards,
    Des

  • R/3 Extraction taking too long to load data into BW

    Hi there,
    I'm trying to extract the SAP standard extractor 0FI_AP_4 into BW, and it's taking endless time.
    Even the extractor checker RSA3 takes very long to execute. I don't know why, since there isn't that much data.
    I enhanced the DataSource with three fields from BSEG using a user exit.
    Is that the reason why it's taking so long? Does a user exit slow down the extraction process?
    What measures should I take to speed up the process?
    Thanks for your time
    Vandana

    Thanks for all you replies.
    Please go through the steps I've gone through :
    - Installed the Business Content; it is version 3.5
    - Changed the update rules and transfer rules, and migrated the DataSource to BI 7
    - Enhanced 0FI_AP_4 to include three fields from the BSEG table
    - Ran RSA3: the new fields are showing, but the loading is quite slow
    - Commented out the exit code and ran RSA3: data shows up, with little difference in runtime
    - Removed the comments and ran again: it's fine, though it takes a little more time than the previous step, and data shows up
    - Replicated the DataSource into BW
    - Created the InfoPackage and started the init process (before this, deleted the previously stored init request)
    - Data isn't loading; please see the error message below.
    Diagnosis
    The data request was a full update. In this case, the corresponding table in the source system does not contain any data.
    System Response
    Info IDoc received with status 8.
    Procedure
    Check the data basis in the source system.
    - Checked the transformation between DataSource 0FI_AP_4 and InfoSource ZFI_AP_4,
      and I DID NOT find the three fields I enhanced from the BSEG table in the 0FI_AP_4 DataSource.
    - Replicated the DataSource 0FI_AP_4 again, but no change.
    Now I don't know what's happening here.
    When I check the DataSource 0FI_AP_4 in RSA6, I can see the three new fields from BSEG.
    When I check RSA3, I can see the data getting populated with the three new fields from BSEG.
    When I check the fields of the DataSource 0FI_AP_4 in BW, I can also see the three new fields. That shows the connection between BW and R/3 is fine, doesn't it?
    Now, can anyone please suggest how to go forward from here?
    Thanks for your time
    Vandana

  • SQL Statement taking too long to get the data

    Hi,
    There are over 2,500 records in a table, and retrieving all of them with 'SELECT * FROM Table' takes too long, i.e. about 4.3 seconds.
    Is there any possible way to shorten the processing time?
    Thanks

    Hi Patrick,
    Here is the SQL statement and the table description.
    ID     Number
    SN     Varchar2(12)
    FN     Varchar2(30)
    LN     Varchar2(30)
    By     Varchar(255)
    Dt     Date(7)
    Add     Varchar2(50)
    Add1     Varchar2(30)
    Cty     Varchar2(30)
    Stt     Varchar2(2)
    Zip     Varchar2(12)
    Ph     Varchar2(15)
    Email     Varchar2(30)
    ORgId     Number
    Act     Varchar2(3)     
    select A."FN" || '' '' || A."LN" || '' ('' || A."SN" || '')'' "Name",
    A."By", A."Dt",
    A."Add" || ''
    '' || A."Cty" || '', '' || A."Stt" || '' '' || A."Zip" "Location",
    A."Ph", A."Email", A."ORgId", A."ID",
    A."SN" "OSN", A."Act"
    from "TBL_OPTRS" A where A."ID" <> 0 ';
    I'm displaying all rows in a report.
    If I use 'select * from TBL_OPTRS', it also takes 4.3 to 4.6 seconds.
    Thanks.

  • Data manager jobs taking too long or hanging

    Hoping someone here can provide some assistance with regard to the 4.2 version. We are specifically using BPC/OutlookSoft 4.2 SP4 (and are in the process of upgrading to BPC 7.5). Three-server environment: SQL, OLAP and Web.
    Problem: Data Manager jobs in each application of a production appset with five applications are either taking too long to complete for very small jobs (single entity/single period data copy/clear, under 1,000 records) or hanging completely for larger jobs. This has been an issue for the last 7 days. During normal operation, small DM jobs ran in under a minute and large ones took only a few minutes.
    Failed attempts at resolution thus far:
    1. Processed all applications from the OLAP server
    2. Confirmed issue is specific to our appset and is not present in ApShell
    3. Copied packages from ApShell to application to eliminate package corruption
    4. Windows security updates were applied to all three servers but I assume this would also impact ApShell.
    5. Cleared tblDTSLog history
    6. Rebooted all three servers
    7. Suspected antivirus however, problem persists with antivirus disabled on all three servers.
    Other Observations
    There are several tables in the SQL database named k2import# and several stored procedures named DMU_k2import#. My guess is these did not get removed because I killed the hung jobs. I'm not sure if their existence is causing any issues.
    To make a long story short: how can I narrow down at which point the jobs are hanging, or what is taking the longest time? I have turned on Debug Script, but I don't have documentation to make sense of all this info. What exactly happens when I run a Clear package? At this point my next step is to run SQL Profiler to get a look at what is going on behind the scenes on the SQL server. I also want to rule out the COM+ objects on the web server, but I'm not sure where to start.
    Any help is greatly appreciated!!
    Thank you,
    Hitesh

    Hi ,
    The problem seems to be related to the database. Do you have any maintenance plan for the database?
    It is specific to your appset because each appset has its own database.
    I suspect you need to run sp_updatestats (Update Statistics) on your database, and I think the issue with your hanging jobs will be solved.
    The DMU_k2import# objects come from hung imports; you can delete them, because they just grow the size of the database and are certainly not used anymore. A sketch of both steps is below.
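    A minimal T-SQL sketch of both steps, assuming the k2import# / DMU_k2import# naming from the post above; the second part only generates DROP statements so you can review them before running anything:
    -- Refresh statistics for all user tables in the current (appset) database.
    EXEC sp_updatestats;
    -- Generate DROP statements for leftover import tables and procedures.
    SELECT 'DROP TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '];'
      FROM INFORMATION_SCHEMA.TABLES
     WHERE TABLE_NAME LIKE 'k2import%';
    SELECT 'DROP PROCEDURE [' + ROUTINE_SCHEMA + '].[' + ROUTINE_NAME + '];'
      FROM INFORMATION_SCHEMA.ROUTINES
     WHERE ROUTINE_NAME LIKE 'DMU[_]k2import%' AND ROUTINE_TYPE = 'PROCEDURE';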
    Regards
    Sorin Radulescu

  • Browser times out when trying to view my website - says the server is taking too long. And no, I don't have a firewall.

    I can't view my website at www.artisancandies.com, even though it's working and everyone else seems to see it. No, I don't have a firewall, and it's not because of my internet provider - I have AT&T at work, and Comcast at home. My husband can see the site on his laptop. I tried dumping my cache in both Firefox and Safari, but it didn't work. I looked at it through proxify.com, and can see it that way, so I know it works. This is so frustrating, because I used to only see it when I typed in artisancandies.com - it would never work for me if I typed in www.artisancandies.com. Now it doesn't work at all. This is the message I get in Firefox:
    "The connection has timed out. The server at www.artisancandies.com is taking too long to respond."
    Please help!!!
    Kristen Scott

    Linc, here's what I've got from what you asked me to do. I hope you don't mind, but it was simple enough to leave everything in, so you could see the progression:
    Kristen-Scotts-Computer:~ kristenscott$ kextstat -kl | awk ' !/apple/ { print $6 $7 } '
    Kristen-Scotts-Computer:~ kristenscott$ sudo launchctl list | sed 1d | awk ' !/0x|apple|com\.vix|edu\.|org\./ { print $3 } '
    WARNING: Improper use of the sudo command could lead to data loss
    or the deletion of important system files. Please double-check your
    typing when using sudo. Type "man sudo" for more information.
    To proceed, enter your password, or type Ctrl-C to abort.
    Password:
    com.microsoft.office.licensing.helper
    com.google.keystone.daemon
    com.adobe.versioncueCS3
    Kristen-Scotts-Computer:~ kristenscott$ launchctl list | sed 1d | awk ' !/0x|apple|edu\.|org\./ { print $3 } '
    com.google.keystone.root.agent
    com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae
    Kristen-Scotts-Computer:~ kristenscott$ ls -1A {,/}Library/{Ad,Compon,Ex,Fram,In,La,Mail/Bu,P*P,Priv,Qu,Scripti,Sta}* 2> /dev/null
    /Library/Components:
    /Library/Extensions:
    /Library/Frameworks:
    Adobe AIR.framework
    NyxAudioAnalysis.framework
    PluginManager.framework
    iLifeFaceRecognition.framework
    iLifeKit.framework
    iLifePageLayout.framework
    iLifeSQLAccess.framework
    iLifeSlideshow.framework
    /Library/Input Methods:
    /Library/Internet Plug-Ins:
    AdobePDFViewer.plugin
    Disabled Plug-Ins
    Flash Player.plugin
    Flip4Mac WMV Plugin.plugin
    Flip4Mac WMV Plugin.webplugin
    Google Earth Web Plug-in.plugin
    JavaPlugin2_NPAPI.plugin
    JavaPluginCocoa.bundle
    Musicnotes.plugin
    NP-PPC-Dir-Shockwave
    Quartz Composer.webplugin
    QuickTime Plugin.plugin
    Scorch.plugin
    SharePointBrowserPlugin.plugin
    SharePointWebKitPlugin.webplugin
    flashplayer.xpt
    googletalkbrowserplugin.plugin
    iPhotoPhotocast.plugin
    npgtpo3dautoplugin.plugin
    nsIQTScriptablePlugin.xpt
    /Library/LaunchAgents:
    com.google.keystone.agent.plist
    /Library/LaunchDaemons:
    com.adobe.versioncueCS3.plist
    com.apple.third_party_32b_kext_logger.plist
    com.google.keystone.daemon.plist
    com.microsoft.office.licensing.helper.plist
    /Library/PreferencePanes:
    Flash Player.prefPane
    Flip4Mac WMV.prefPane
    VersionCue.prefPane
    VersionCueCS3.prefPane
    /Library/PrivilegedHelperTools:
    com.microsoft.office.licensing.helper
    /Library/QuickLook:
    GBQLGenerator.qlgenerator
    iWork.qlgenerator
    /Library/QuickTime:
    AppleIntermediateCodec.component
    AppleMPEG2Codec.component
    Flip4Mac WMV Export.component
    Flip4Mac WMV Import.component
    Google Camera Adapter 0.component
    Google Camera Adapter 1.component
    /Library/ScriptingAdditions:
    Adobe Unit Types
    Adobe Unit Types.osax
    /Library/StartupItems:
    AdobeVersionCue
    HP Trap Monitor
    Library/Address Book Plug-Ins:
    SkypeABDialer.bundle
    SkypeABSMS.bundle
    Library/Internet Plug-Ins:
    Move_Media_Player.plugin
    fbplugin_1_0_1.plugin
    Library/LaunchAgents:
    com.adobe.ARM.202f4087f2bbde52e3ac2df389f53a4f123223c9cc56a8fd83a6f7ae.plist
    com.apple.FolderActions.enabled.plist
    com.apple.FolderActions.folders.plist
    Library/PreferencePanes:
    A Better Finder Preferences.prefPane
    Kristen-Scotts-Computer:~ kristenscott$

  • AirPlay Screen Mirroring (Mavericks) disconnects frequently with "Feedback taking too long to send" message

    My company has several TVs with AppleTVs (3rd generation units) connected in our conference rooms so we can "Screen Mirror" our Mac laptops via AirPlay during meetings. Many employees have complained that AirPlay Screen Mirroring drops frequently during meetings for no apparent reason.
    In attempts to determine the cause of the issue, I removed the AppleTV units from Wi-Fi and hard-wired them all to the LAN (100Mbps/Full duplex, no switchport errors seen on the Cisco switch). I upgraded the AppleTVs to the latest firmware. I had our AppleTV users ensure they were running MacOS Mavericks with the latest software updates installed. I had the Mac laptops hard-wired into the LAN during meetings in the conference rooms. None of these changes resolved the AirPlay issue.
    I reviewed the MacOS "/var/log/system.log" file from the laptops of several users that reported issues. I found a pattern that seemed to indicate that the "coreaudiod" process reported "Feedback taking too long to send" several times before the AppleTV connection was terminated. Also, from a network trace (using "tcpdump") taken during an unexpected AirPlay Screen Mirroring disconnection, I could see that the Mac laptop sent a TCP FIN packet to the AppleTV unit (this would indicate that the MacOS laptop initiated the closing of the AirPlay connection).
    I have included the relevant log file entries below. Please note that the LAN internal to our company is "solid" and there have been no connectivity issues detected or reported during the times the AirPlay sessions were disconnected.
    I believe I have found a workaround for this issue. By going into "System Preferences" > "Sound" and changing the "Output" device back to "Internal Speakers" (rather than the AirPlay destination), the AirPlay Screen Mirroring connection seems to remain stable.
    My questions are:
    - is anyone else experiencing this type of problem? any other solutions recommended?
    - is there a way to change the AirPlay defaults so that Screen Mirroring only sends the video (not audio)?
    - does anyone know what the log file entries indicate (like, what does "Feedback taking too long to send...." mean)?
    - any fix planned for this issue?
    From: "/var/log/system.log":
    Jan 16 10:50:16 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:16.454404 AM [AirPlay] ### Feedback taking too long to send (1 seconds, 1 total)
    Jan 16 10:50:18 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:18.524517 AM [AirPlay] ### Feedback taking too long to send (4 seconds, 2 total)
    Jan 16 10:50:20 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:20.533639 AM [AirPlay] ### Feedback taking too long to send (6 seconds, 3 total)
    Jan 16 10:50:22 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:22.548168 AM [AirPlay] ### Feedback taking too long to send (8 seconds, 4 total)
    Jan 16 10:50:24 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:24.554522 AM [AirPlay] ### Feedback taking too long to send (10 seconds, 5 total)
    Jan 16 10:50:24 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:24.554809 AM [AirPlay] ### Report network status (3, en0) failed: 1/0x1 kCFHostErrorHostNotFound / kCFStreamErrorSOCKSSubDomainVersionCode / kCFStreamErrorSOCKS5BadResponseAddr / kCFStreamErrorDomainPOSIX / evtNotEnb / siInitSDTblErr / kUSBPending / dsBusError / kStatusIsError / kOTSerialSwOverRunErr / cdevResErr / EPERM
    Jan 16 10:50:26 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:26.545531 AM [AirPlay] ### Feedback taking too long to send (12 seconds, 6 total)
    Jan 16 10:50:28 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:28.559050 AM [AirPlay] ### Feedback taking too long to send (14 seconds, 7 total)
    Jan 16 10:50:30 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:30.628868 AM [AirPlay] ### Feedback taking too long to send (16 seconds, 8 total)
    Jan 16 10:50:32 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:32.655638 AM [AirPlay] ### Feedback taking too long to send (18 seconds, 9 total)
    Jan 16 10:50:34 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:34.641952 AM [AirPlay] ### Feedback taking too long to send (20 seconds, 10 total)
    Jan 16 10:50:36 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:36.659854 AM [AirPlay] ### Feedback taking too long to send (22 seconds, 11 total)
    Jan 16 10:50:38 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:38.653594 AM [AirPlay] ### Feedback taking too long to send (24 seconds, 12 total)
    Jan 16 10:50:40 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:40.659279 AM [AirPlay] ### Feedback taking too long to send (26 seconds, 13 total)
    Jan 16 10:50:42 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:42.745549 AM [AirPlay] ### Feedback taking too long to send (28 seconds, 14 total)
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.532853 AM [AirPlay] ### Endpoint "AppleTV" feedback error: -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533151 AM [AirPlay] ### Feedback failed: -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533273 AM [AirPlay] ### Error with endpoint "AppleTV": -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533427 AM [BonjourBrowser] Reconfirming PTR for AppleTV._airplay._tcp.local. on en0
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533588 AM [BonjourBrowser] Reconfirming PTR for 9C207BBD8EA1@AppleTV._raop._tcp.local. on en0
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533839 AM [AirPlay] ### AirPlay report: Network dead for 10+ seconds after 159 seconds, screen, nm "AppleTV", tp WiFi, md AppleTV3,1, sv 190.9, rt 0, fu 0, rssi -54
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.534104 AM [AirPlay] ### Report network status (5, en0) failed: 1/0x1 kCFHostErrorHostNotFound / kCFStreamErrorSOCKSSubDomainVersionCode / kCFStreamErrorSOCKS5BadResponseAddr / kCFStreamErrorDomainPOSIX / evtNotEnb / siInitSDTblErr / kUSBPending / dsBusError / kStatusIsError / kOTSerialSwOverRunErr / cdevResErr / EPERM
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.534315 AM [AirPlay] Deactivating virtual display stream for quiesce
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543682 AM [AirPlayScreenClient] Stopping session
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543815 AM [AirPlay] Quiescing endpoint 'AppleTV'
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543907 AM [AirPlayScreenClient] Stopping session internal
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.544218 AM [AirPlayScreenClient] Stopped session internal
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.544266 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.544297 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.553084 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.554904 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.557604 AM [AirPlayAVSys] Ignoring route away when AirPlay not current
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.560307 AM [AirPlayAVSys] Ignoring route away when AirPlay not current
    Jan 16 10:50:44 My-MacBook-Pro.local WindowServer[89]: Display 0x04280880: GL mask 0x21; bounds (0, 0)[1920 x 1080], 62 modes available
    Jan 16 10:50:44 My-MacBook-Pro.local WindowServer[89]: GLCompositor: GL renderer id 0x01022727, GL mask 0x0000001f, accelerator 0x00004ccb, unit 0, caps QEX|MIPMAP, vram 2048 MB
    I am happy to provide more information if needed.
    Thank you.
    -Tim

    I'm currently still experiencing this as well. I've confirmed it occurs on 10.9.1, 10.9.2, and 10.9.3 on MacBook Pro Retinas (mid-2012) and 2013 MacBooks. It happens on multiple ATVs, not just one; all are updated to 6.1.1, and a simple reboot seems to fix it temporarily, but it does come back. All the ATVs connect to the network via wireless, not Ethernet. These are 3rd-gen ATVs, but I checked the serial numbers and they do not match the bad batch of Apple TVs from 2013 that Apple offered up for replacement due to the bad firmware update. None of the computers have the firewall turned on. Here are the two logs that we always find after the issue occurs (logs are recent; it happened this morning):
    5/30/14 8:57:36.017 AM coreaudiod[183]: 2014-05-30 08:57:36.016946 AM [AirPlay] ### Feedback taking too long to send (30 seconds, 17 total)
    5/30/14 8:57:36.332 AM coreaudiod[183]: 2014-05-30 08:57:36.331492 AM [AirPlay] ### Feedback failed: -6723/0xFFFFE5BD kCanceledErr
    The user will get disconnected from AirPlay anywhere between 30 seconds and 3 minutes after connecting, and can reconnect, but then gets disconnected again after the same interval. One interesting thing to note: when the "Feedback taking too long to send" errors start occurring and the countdown to disconnect starts ticking toward 30, they refer solely to audio not being sent over the network; video keeps working just fine. If I try to play sound, I get another log entry and the sound doesn't play through the speakers. After a reboot, sound works fine and the feedback errors do not show up. I've also tried switching to Internal Speakers (since it switches to AirPlay speakers by default) after connecting to AirPlay and seeing the feedback timer start in the Console logs, but even then the log continues to say it's taking too long to send, and it disconnects after 30 seconds.
    This issue has been ongoing for months; I've had a ticket logged as far back as January with this occurring, but it's infrequent enough that we've just restarted and moved on. I'd say it affects about 5%-10% of meetings, but that's an entire meeting without the ability to AirPlay until someone comes down and reboots the unit.
    I don't often post in this forum, but this is still an active issue with no resolution, proof that it's occurring on other people's systems, and no firmware updates released to correct it. It would be nice to know of any workarounds other than buying lamp timers for each conference room just to get a functional ATV, or putting up a sign that says: if you get disconnected every 3 minutes, reboot the ATV. The whole reason we're using Apple products is ease of use; otherwise I'd put together a much cheaper solution myself. Any help or recommended troubleshooting steps would be fantastic at this point.

  • Attribute change run taking too long to complete

    Hi all,
    The attribute change run has been taking too long to complete. It has to realign 50-odd aggregates, some by delta, some by reconstruction. Despite all the aggregates, it used to finish quickly, but for the last 4-5 days it has been taking an indefinite time to finish.
    Can anyone please suggest what reasons may be causing this, and what the solution to the problem might be? It is becoming a big issue, so kindly help with your advice.
    Promise to reward your answer liberally.
    Regards,
    Pradyut.

    Hi,
    Check with your functional owners in R/3 whether mass changes/realignments or classification changes are going on in master data, e.g. reassigning materials to other material groups. This causes a major realignment in BW for all the aggregates. Otherwise, check for parameter changes, patches, or missing DB statistics with your SAP Basis team.
    Kind regards, Patrick Rieken.

  • WebI report taking too long to open

    Hi All,
    When I try to open a WebI report in InfoView, it takes a long time to open and refresh.
    Please suggest me a solution.
    Thanks in advance..
    Regards,
    Mahesh

    Hi,
    As the issue you are facing is that the WebI report takes too long to open and refresh, I would recommend the steps below.
    1. Check whether the report is set to "Refresh on Open"; if so, you may need to uncheck it, save the report, and open it again.
    2. Try to run the same query directly against the backend database and see how long it takes to return the data.
    3. Try refreshing the report with a smaller data selection.
    4. Make the report run on a specific WebI server, and while it refreshes have your BOBJ admin monitor that process to see whether it hangs, uses high memory, etc.
    5. Restart the WebI process and run it again.
    Thanks,
    aKs

  • Database open (recovery) taking too long

    Hi,
    I've been using your awesome Berkeley DB Java Edition for a couple of years and have been very happy with it.
    I am currently facing an issue trying to open the database after a disk-full incident (which left the database unable to write, and hence not closed properly).
    While recovery seems to be operating, it has been taking an inordinate amount of time: 16 hours so far. My database holds around 200 GB of data, which inflated to over 450 GB during deletion of entries, gobbling up all the free space on disk.
    My questions are:
    * Should I continue to wait for recovery?
    * Is there any chance that recovery is looping?
    * Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
    Some other information that may help:
    * Recovery has decreased the size of the last significant file and has created 3 new files since it started running.
    * I have been monitoring the open files (using lsof); they change every now and then, though a good amount of time is spent near the end of the database.
    Thus, I feel like recovery is running normally, just taking too long. Please let me know your opinion.
    A few other things I should mention regarding my issue:
    * The database was, until yesterday, running on BDB JE 3.3.75. After running several hours of recovery, I upgraded to 4.1.10 (since I read about a possible recovery-looping bug in one of the versions).
    * Once 4.1.10 started recovery, it spat out errors about the last 2 files. Only after deleting those 2 files (the last being 0 bytes, the 2nd-last about 5 KB) did recovery start. Note that the older 3.3.75 recovery never complained about those files. I can post the errors here if relevant.
    * Some of the jdb files (about 500 of the 47,000 files that make up the database) are 100 MB files, since I experimented with a larger file size for a few days, then reverted the setting.
    Would any of the above affect a successful recovery?
    My setup is:
    OS:Linux CentOS 5.2, 64-bit, kernel 2.6.18-92.el5
    JVM: Sun Java 1.6.0_20, 64-bit
    Memory: 16 GB RAM, of which 8 GB is allocated to the java process (-Xmx8000M -Xms8000M)
    BDB cache set to use 6GB RAM (envconfig.setCacheSize(6000000000))
    Only the BDB basic API is being used (Environment, database, cursors). We do not use DPL, or HA features.
    Awaiting your kind response,
    Sushant A

    Hi Sushant,
    > Should I continue to wait for recovery? Is there any chance that recovery is looping?
    I'm not aware of a bug that would cause recovery to loop; however, you may want to take thread dumps to see whether it is progressing. It isn't easy to tell, since each phase of recovery is in fact a loop. What you can tell easily from the thread dumps is whether recovery is blocked (completely stopped) for some reason. I don't know of a bug that would cause this either, but it's something I would check for.
    Assuming it is not blocked, I suggest that you leave recovery running, and additionally (in parallel) try to obtain some information about your log. While recovery is running you can run the DbPrintLog utility, which does not itself run recovery. I suggest running the following command, which will tell us in general what your log looks like and in particular how far apart the checkpoints are:
    java -jar je-x.y.z.jar DbPrintLog -h <envHome> -S > <output>
    Please post the output.
    If checkpoints are not running in your application for some reason, or they are running very infrequently, this can cause VERY long recoveries. Unfortunately, you may have such a problem in your app and not be aware of it, until you crash and have to recover. To guard against this sort of thing in the future, you should keep an eye on the checkpoint frequency. EnvironmentStats.getNCheckpoints and getEndOfLog can together be used to tell how much log is written between checkpoints. We will also be able to see this from the DbPrintLog -S output.
    > Is there an easier way (DbDump?) to extract data from the database without having to perform recovery?
    DbDump normally runs recovery. DbDump with the -r or -R option does not run recovery, but has other drawbacks. With -r, a large amount of memory may be necessary to dump an accurate representation of your data set. If this fails because you run out of memory, -R can be used, but this will dump multiple versions of each record and it will be up to you to interpret the output.
    If regular recovery does not succeed, then DbDump -r is the next thing to try.
    > Would any of the above affect a successful recovery?
    No, I don't believe so.
    --mark
