Data Copy Issue

Hi All,
I have migrated Hyperion Planning from 9.3.1 to 11.1.2 and now want to bring the data over from 9.3.1 by copying the page and index files. I copied the .ind, .pag, .tct, and .esm files from 9.3.1 to 11.1.2, but I don't see the data in 11.1.2. When I look at the database properties in 11.1.2 I can see the block statistics, and they match 9.3.1, but when I open the web forms there is no data. Please help.
Please let me know if I am missing any file or step.
Thank You
MP

Let me preface this by saying I'm not completely up on the architecture of 11.
That being said, in the old days of 6.5.1, yes, you also needed to copy the .db (and .dbb, if it exists) file.
But it would be better to export the data (a level-0 export makes for smaller files, though you would then need to calc/aggregate it back up) and import it into the new application.
Robert
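A minimal calc-script sketch of the level-0 export Robert describes (the output path here is hypothetical; DATAEXPORT is available from Essbase 9.3 onward). The exported file can then be loaded into the 11.1.2 cube with a load rule and aggregated back up:

SET DATAEXPORTOPTIONS
{
    DataExportLevel "LEVEL0";
    DataExportOverwriteFile ON;
};
/* export all level-0 data to a comma-delimited flat file */
DATAEXPORT "File" "," "/tmp/plan_level0.txt" "#MI";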

Similar Messages

  • How to get the previous state of my data after issuing the commit method

    How can I get the previous state of some data after issuing the commit method in an entity bean (it should not use any offline storage)?

    >
    Is there any way to get the state apart from using offline storage?
    As I said, the caller keeps a copy in memory.
    Naturally, if it is no longer in memory then that is a problem.
    >
    And also, what do you mean by audit log?
    You keep track of every change to the database by keeping the old data. There are three ways:
    1. Each table has a version number/delete flag for each record. A record is never updated nor deleted. Instead a new record is created with a new version number and with the new data.
    2. Each table has a duplicate table which has all of the same columns. When the first table is modified the old data is moved to the duplicate table.
    3. A single table is used which has columns for 'table', 'field', 'data' and 'activity' (update, delete). When a change is made in any table then this table is updated. This is generally of limited usability due to the difficulty in recovering the data.
    All of the above can have a user id, timestamp, and/or additional information which is relevant to the data being changed.
    Note that ALL of this is persisted storage.
    I am not sure what this really has to do with "offline storage" unless you are using that term to refer to backed up data which is not readily available.

  • Data copy in Hyperion Planning taking a long time

    Hi All,
    Good morning.
    I am using data copy in Hyperion Planning (11.1.2.2) to copy from one scenario to another, selecting account annotations and supporting details.
    The Essbase part of the copy completed (I checked the sessions), but the job console in Planning has said "Processing" for the past 2 hours.
    My Java heap size for Planning is 1.5 GB and the backend database is SQL Server.
    My suspicion is that the issue is with the backend SQL, but I don't know where to start.
    Can anyone please guide me?

    I am working with version 11.1.2.1 and running into the same issue. The Supporting Detail option works fine if the application has just a few details, say 50 cells, but if we have around 500 cells with details then the copy process never ends, the details are not copied, and I have to restart the Planning service. As a workaround we use a two-step process. In the first step we copy just the Essbase data, either with the Copy Data functionality of Planning (with no data copy options enabled) or with an Essbase calculation script. For the Supporting Details piece we use the Export for Edit functionality of LCM to export them to an XML file, edit the XML file to change the source member name to the target member name, and finally use the Import after Edit functionality of LCM. Of course this works only if the Planning application was deployed using EPMA.
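    A rough calc-script sketch of the Essbase-only copy step (the scenario names, the year member and the FIX are hypothetical; adjust them to your own dimensionality):
    /* copy all data for FY13 from the Working scenario to the Final scenario */
    FIX ("FY13", @RELATIVE("Entity", 0), @RELATIVE("Account", 0))
        DATACOPY "Working" TO "Final";
    ENDFIX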

  • Data Copy between entities

    Hi all,
    I have an issue in data copy between entities and scenarios.
    I have the raw data in entities 1,2,3 etc and Scenario A. I need to copy this data to entities 4,5,6 and Scenario B.
    S#A.E#1 = S#B.E#4
    S#A.E#2 = S#B.E#5
    S#A.E#3 = S#B.E#6
    When I do this using HS.EXP I get an error message saying "Invalid destination specified".
    But it is invalid to use the Entity dimension on the LHS of HS.EXP, so how do I go about solving this problem?
    Any help will be greatly appreciated.
    Thanks!
    Ramith

    Ramith
    You would need to do something like the following to get around this, since you cannot specify members outside of the current subcube on the destination (left-hand) side of HS.Exp, but you can on the source side:
    If HS.Scenario.Member = "A" Then
        If HS.Entity.Member = "2" Then
            HS.Exp "A#ALL = S#B.E#4"
        End If
    End If
    JTF

  • Re: How to Improve Data Copy Performance

    Is there a way to tell a data copy to ignore #Missing values and only copy values that are non-#Missing? I'm looking to improve my DATACOPY script beyond just utilizing FIX statements.
    Please provide any suggestions for optimizing a data copy script.
    Thanks for your help!

    I'd suggest having a look at the documentation for the configuration file setting COPYMISSINGBLOCK. This prevents the creation of #Missing blocks in the destination set when copying from a dense dimension.
    Obviously, within an entire block (e.g. when copying from a sparse dimension) a #Missing value is just a #Missing value - you can't do anything about that.
    As a side issue, you may need to consider that different calculation results can be produced by some calculations depending on whether or not blocks exist.
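    For what it's worth, a rough sketch of how this might look (the member names are hypothetical; COPYMISSINGBLOCK is normally set in essbase.cfg, and there is also a SET COPYMISSINGBLOCK calculation command that controls the same behaviour for a single script):
    /* essbase.cfg (server-wide):  COPYMISSINGBLOCK FALSE */
    SET COPYMISSINGBLOCK OFF;
    FIX ("FY13", @RELATIVE("Entity", 0))
        DATACOPY "Budget" TO "Forecast";
    ENDFIX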

  • Windows 8.1 Data reordering issue with Intel Adaptors

    According to Intel, there is a data reordering issue with their adaptors and probably with this dumb WiDi software. This is from Intel's site; they say some are fixed and "A future Windows 8 fix will address this issue for other Intel wireless adapters." I have one. Nope, still broken; I get drops all the time. Brand new Toshiba laptop, i7, 16 GB of RAM, an SSD and a 2 GB video card. It would be nice to be able to play games, but I get dropped all the time. Now would Microsoft quit hiding this and fix the darn thing. Also, I've been a system admin for 13 years and have built over 1000 PCs and servers; I know bad software. Please fix this. PLEASE. It's not going to just go away, and it's not just Toshiba - I have seen other companies with the same problem. If there is a fix PLEASE POST IT, or even a workaround; I have tried everything.
    http://www.intel.com/support/wireless/wlan/sb/CS-034535.htm

    Hi,
    Have your first tried the software fix under this link for your network adapter?
    http://www.intel.com/support/wireless/wtech/proset-ws/sb/CS-034041.htm
    Please Note: The third-party product discussed here is manufactured by a company that is independent of Microsoft. We make no warranty, implied or otherwise, regarding this product's performance or reliability.
    Also, you can try to check if there is any driver update under Device Manager or from the manufacturer's website.
    Kate Li
    TechNet Community Support
    Yep, didn't work. I still get drops all the time, so I had to run a Cat 5e cable to my laptop from my modem; I have an Atheros gigabit LAN adaptor and it works great, but the wireless still drops all the time. Has Microsoft released the patch to fix this, or is it coming in the 8.1 update due in April? The funny thing is it's all for WiDi, and I don't even use WiDi - I got software from Samsung that works better on my TV. Intel and Microsoft need to get this fixed, because they're driving off gamers, and those are the people who make sure they buy Microsoft so they can play games. With the wireless link dead, a great laptop is worthless, so what's the point? I've been in IT for 13 years building PCs and servers, which is how I knew how to run a 60 ft Cat 5e line through a two-story house and terminate it. Most people don't. Fix the problem.

  • Data Load Issue "Request is in obsolete version of DataSource"

    Hello,
    I am getting a very strange data load issue in production. I am able to load the data up to the PSA, but when I run the DTP to load the data into 0EMPLOYEE (a master data object) I get the message below:
    Request REQU_1IGEUD6M8EZH8V65JTENZGQHD not extracted; request is in obsolete version of DataSource
    The request REQU_1IGEUD6M8EZH8V65JTENZGQHD was loaded into the PSA table when the DataSource had a different structure to the current one. Incompatible changes have been made to the DataSource since then and the request cannot be extracted with the DTP anymore.
    I have taken the following actions:
    1. Replicated the DataSource
    2. Deleted all requests from the PSA
    3. Activated the DataSource using RSDS_DATASOURCE_ACTIVATE_ALL
    4. Re-transported the DataSource, transformation, and DTP
    I am still getting the same issue. If you have any ideas, please reply ASAP.
    Samit

    Hi
    Generate your DataSource in R/3, then replicate it and activate the transfer rules.
    Regards,
    Chandu.

  • ORA-01403 No Data Found Issue

    Hi,
    I'm very new to Streams and have a doubt regarding an ORA-01403 issue occurring during replication. I need your kind help in this regard. Thanks in advance.
    Oracle version: 10.0.3.0
    1. Suppose there are 10 LCRs in a transaction and one of the LCRs causes ORA-01403, so none of the LCRs get executed.
    We can read the data of this LCR and manually update the record in the destination database.
    Even though this is done, when re-executing the transaction I get the same ORA-01403 on the same LCR.
    What could be the possible reason?
    Since this is a large-scale system with thousands of transactions, it is not possible to handle the no-data-found issues occurring in the system manually.
    I have written a PL/SQL block which generates UPDATE statements from the old data available in the LCR, so that I can re-execute the transaction again.
    The PL/SQL block is given below. Could you please check whether there are any issues in the way it generates the UPDATE statements? Thank you.
    --Script for generating the UPDATE statements for the message which caused the 'NO DATA FOUND' error.
    DECLARE
       RET        NUMBER;                -- return value from GetObject / Get* calls
       I          NUMBER;                -- index used while collecting the LCRs
       PK_COUNT   NUMBER;                -- number of PK columns for a table
       LCR        ANYDATA;               -- holds one logical change record
       TYP        VARCHAR2 (61);         -- type name of a column
       ROWLCR     SYS.LCR$_ROW_RECORD;   -- the row LCR that caused the error in the txn
       OLDLIST    SYS.LCR$_ROW_LIST;     -- old data of the record that was being updated/deleted
       NEWLIST    SYS.LCR$_ROW_LIST;
       UPD_QRY    VARCHAR2 (5000);
       EQUALS     VARCHAR2 (5) := ' = ';
       DATA1      VARCHAR2 (2000);
       NUM1       NUMBER;
       DATE1      DATE;
       TIMESTAMP1 TIMESTAMP (3);
       ISCOMMA    BOOLEAN;
       TYPE TAB_LCR IS TABLE OF ANYDATA INDEX BY BINARY_INTEGER;
       TYPE PK_COLS IS TABLE OF VARCHAR2 (50) INDEX BY BINARY_INTEGER;
       LCR_TABLE  TAB_LCR;
       PK_TABLE   PK_COLS;
    BEGIN
       -- Collect the LCRs of the failed transaction in error-creation order.
       I := 1;
       FOR TXN_ID IN (SELECT MESSAGE_NUMBER, LOCAL_TRANSACTION_ID
                        FROM DBA_APPLY_ERROR
                       WHERE LOCAL_TRANSACTION_ID = '2.85.42516'
                       ORDER BY ERROR_CREATION_TIME)
       LOOP
          SELECT DBMS_APPLY_ADM.GET_ERROR_MESSAGE (TXN_ID.MESSAGE_NUMBER,
                                                   TXN_ID.LOCAL_TRANSACTION_ID)
            INTO LCR
            FROM DUAL;
          LCR_TABLE (I) := LCR;
          I := I + 1;
       END LOOP;
       DBMS_OUTPUT.PUT_LINE ('size >' || LCR_TABLE.COUNT);
       FOR K IN 1 .. LCR_TABLE.COUNT
       LOOP
          ROWLCR := NULL;
          RET := LCR_TABLE (K).GETOBJECT (ROWLCR);
          -- Find the number of PK columns of the table this LCR belongs to.
          SELECT COUNT (1)
            INTO PK_COUNT
            FROM ALL_CONS_COLUMNS COL, ALL_CONSTRAINTS CON
           WHERE COL.TABLE_NAME = CON.TABLE_NAME
             AND COL.CONSTRAINT_NAME = CON.CONSTRAINT_NAME
             AND CON.CONSTRAINT_TYPE = 'P'
             AND CON.TABLE_NAME = ROWLCR.GET_OBJECT_NAME;
          DBMS_OUTPUT.PUT_LINE ('Count of PK Columns >' || PK_COUNT);
          UPD_QRY := 'UPDATE ' || ROWLCR.GET_OBJECT_NAME || ' SET ';
          -- Generate the SET clause from the old values carried by the LCR.
          OLDLIST := ROWLCR.GET_VALUES ('old');
          NEWLIST := ROWLCR.GET_VALUES ('old');
          ISCOMMA := FALSE;
          FOR J IN 1 .. NEWLIST.COUNT
          LOOP
             IF NEWLIST (J) IS NOT NULL
             THEN
                IF J < NEWLIST.COUNT AND ISCOMMA
                THEN
                   UPD_QRY := UPD_QRY || ',';
                END IF;
                ISCOMMA := FALSE;
                TYP := NEWLIST (J).DATA.GETTYPENAME;
                IF TYP = 'SYS.VARCHAR2'
                THEN
                   RET := NEWLIST (J).DATA.GETVARCHAR2 (DATA1);
                   IF DATA1 IS NOT NULL
                   THEN
                      UPD_QRY := UPD_QRY || NEWLIST (J).COLUMN_NAME || EQUALS
                                 || ' ''' || SUBSTR (DATA1, 1, 253) || '''';
                      ISCOMMA := TRUE;
                   END IF;
                ELSIF TYP = 'SYS.NUMBER'
                THEN
                   RET := NEWLIST (J).DATA.GETNUMBER (NUM1);
                   IF NUM1 IS NOT NULL
                   THEN
                      UPD_QRY := UPD_QRY || NEWLIST (J).COLUMN_NAME || EQUALS
                                 || ' ' || NUM1;
                      ISCOMMA := TRUE;
                   END IF;
                ELSIF TYP = 'SYS.DATE'
                THEN
                   RET := NEWLIST (J).DATA.GETDATE (DATE1);
                   IF DATE1 IS NOT NULL
                   THEN
                      UPD_QRY := UPD_QRY || NEWLIST (J).COLUMN_NAME || EQUALS
                                 || ' TO_DATE( ''' || DATE1
                                 || ''', ''DD/MON/YYYY HH:MI:SS AM'')';
                      ISCOMMA := TRUE;
                   END IF;
                ELSIF TYP = 'SYS.TIMESTAMP'
                THEN
                   RET := NEWLIST (J).DATA.GETTIMESTAMP (TIMESTAMP1);
                   IF TIMESTAMP1 IS NOT NULL
                   THEN
                      UPD_QRY := UPD_QRY || NEWLIST (J).COLUMN_NAME || EQUALS
                                 || ' ''' || TIMESTAMP1 || '''';
                      ISCOMMA := TRUE;
                   END IF;
                END IF;
             END IF;
          END LOOP;
          -- Generate the WHERE clause from the primary-key columns.
          UPD_QRY := UPD_QRY || ' WHERE ';
          FOR I IN 1 .. PK_COUNT
          LOOP
             SELECT COLUMN_NAME
               INTO PK_TABLE (I)
               FROM ALL_CONS_COLUMNS COL, ALL_CONSTRAINTS CON
              WHERE COL.TABLE_NAME = CON.TABLE_NAME
                AND COL.CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                AND CON.CONSTRAINT_TYPE = 'P'
                AND COL.POSITION = I
                AND CON.TABLE_NAME = ROWLCR.GET_OBJECT_NAME;
             FOR J IN 1 .. NEWLIST.COUNT
             LOOP
                IF NEWLIST (J) IS NOT NULL
                   AND NEWLIST (J).COLUMN_NAME = PK_TABLE (I)
                THEN
                   UPD_QRY := UPD_QRY || ' ' || NEWLIST (J).COLUMN_NAME || EQUALS;
                   TYP := NEWLIST (J).DATA.GETTYPENAME;
                   IF TYP = 'SYS.VARCHAR2'
                   THEN
                      RET := NEWLIST (J).DATA.GETVARCHAR2 (DATA1);
                      UPD_QRY := UPD_QRY || ' ''' || SUBSTR (DATA1, 1, 253) || '''';
                   ELSIF TYP = 'SYS.NUMBER'
                   THEN
                      RET := NEWLIST (J).DATA.GETNUMBER (NUM1);
                      UPD_QRY := UPD_QRY || ' ' || NUM1;
                   END IF;
                   IF I < PK_COUNT
                   THEN
                      UPD_QRY := UPD_QRY || ' AND ';
                   END IF;
                END IF;
             END LOOP;
          END LOOP;
          UPD_QRY := UPD_QRY || ';';
          DBMS_OUTPUT.PUT_LINE (UPD_QRY);
       END LOOP;
    END;

    Thanks for your replies, HTH and Dipali.
    I would like to clarify some points from my side on the issue I have raised.
    1. The no-data-found error is happening on a table for which supplemental logging is enabled.
    2. As per my understanding, the apply process compares the existing data in the destination database with the "old" data in the LCR.
    When there is a mismatch between the two, ORA-01403 is thrown. (Please tell me whether my understanding is correct or not.)
    3. This mismatch can be on a date field or even on the millisecond of a timestamp.
    Now, the point I'm really wondering about:
    Somehow a mismatch was introduced in the destination database (I'm not sure of the reason) and ORA-01403 is thrown.
    If we update the destination database with the "old" data from the LCR, this mismatch should be resolved, shouldn't it?
    Reply to you, Dipali:
    If nothing else works out, I'm planning to put a conflict handler on all tables with the "OVERWRITE" option, using the following script.
    --Generate a script that applies an OVERWRITE conflict handler to the tables for which supplemental logging is enabled
    declare
       count1 number;
       query  varchar2(500) := null;
    begin
       for tables in (select table_name
                        from user_tables
                       where table_name in ('NAMES OF TABLES FOR WHICH SUPPLEMENTAL LOGGING IS ENABLED'))
       loop
          count1 := 0;
          dbms_output.put_line('DECLARE');
          dbms_output.put_line('cols DBMS_UTILITY.NAME_ARRAY;');
          dbms_output.put_line('BEGIN');
          -- number of primary-key columns of the table
          select max(position)
            into count1
            from all_cons_columns col, all_constraints con
           where col.table_name = con.table_name
             and col.constraint_name = con.constraint_name
             and con.constraint_type = 'P'
             and con.table_name = tables.table_name;
          -- one cols(n) assignment per primary-key column
          for i in 1 .. count1
          loop
             query := null;
             select 'cols(' || position || ')' || ' := ' || '''' || column_name || ''';'
               into query
               from all_cons_columns col, all_constraints con
              where col.table_name = con.table_name
                and col.constraint_name = con.constraint_name
                and con.constraint_type = 'P'
                and con.table_name = tables.table_name
                and position = i;
             dbms_output.put_line(query);
          end loop;
          dbms_output.put_line('DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(');
          dbms_output.put_line('object_name => ''ICOOWR.' || tables.table_name || ''',');
          dbms_output.put_line('method_name => ''OVERWRITE'',');
          dbms_output.put_line('resolution_column => ''COLM_NAME'',');
          dbms_output.put_line('column_list => cols);');
          dbms_output.put_line('END;');
          dbms_output.put_line('/');
          dbms_output.put_line('');
       end loop;
    end;
    Reply to you, HTH:
    Our destination database is a replica of the source, and no triggers are running on any of these tables.
    This is not the first time I'm facing this issue. Earlier we had to take long outages, clear the replica database, and apply a dump from the source.
    I can't contemplate doing that again now.

  • 4G LTE data reception issue in area of work building

    Hi, I'm having a data reception issue in a certain area at work.  The signal indicator at the upper right of the homescreen shows "4GLTE" but this is clearly inaccurate since I am not able to navigate to websites or send/receive multimedia messages.  If I move ~30 feet east in the building, the reception is restored.  Two people with iPhone 5 devices have the same issue.  However, the Verizon iPhone 5 allows you to turn off LTE.  Once this was done and the signal fell back to 3G, reception was restored, albeit with slower speeds, but at least reception wasn't completely blocked.  I understand 4G is not available in all areas, but in this case, the phone is not automatically switching to 3G and there is no workaround because there is no option to turn off LTE on the Z10.  In the "Settings" -> "Network Connections" -> "Mobile Network" -> "Network Technology" dropdown, the only values are:
    UMTS/GSM (when I switch to this, no networks are found)
    Global (the current selection)
    LTE/CDMA
    This is a big problem for me because for 8+ hours in the day I can't receive MMS messages or navigate to websites.

    Hi, Nate650,
    Sorry to hear about your problem with 4G. First, let me ask, have you updated your Z10 to the latest official software version? I had a similar problem with my Z10. After about an hour on the phone with CS, we figured out it was a problem with the tower near me. The problem was fixed by VZW and I have not had connection issues. You are right, though, about the Z10 falling back to 3G. Mine did before the update but not since.
    Doc

  • Logical Standby Data Consistency issues

    Hi all,
    We have been running a logical standby instance for about three weeks now. Both our primary and logical are 11g (11.1.0.7) databases running on Sun Solaris.
    We have off-loaded our Discoverer reporting to the logical standby.
    About three days ago, we started getting the following error message (initially for three tables, but since this morning for a whole lot more):
    ORA-26787: The row with key (<column>) = (<value>) does not exist in table <schema>.<table>
    This error implies that we have data consistency issues between our primary and logical standby databases, but we find that hard to believe,
    because the "data guard" status is set to "standby", implying that the schemas being replicated by Data Guard are not available for user modification.
    any assistance in this regard would be greatly appreciated.
    thanks
    Mel

    It is a bug: Bug 10302680. Apply the corresponding Patch 10302680 to your standby DB.

  • How to get material's last posting date of issuing to production order?

    Hi,
    In my scenario, I need to get a material's last posting date of issue to a production order (e.g. movement type 261).
    I tried selecting the material documents whose movement type is 261, restricting the posting date month by month until the first material document is selected.
    But this method seems quite inefficient.
    What kind of algorithm would be more efficient?
    Thanks
    Wesley

    Hi,
    select max( budat )
      from mkpf
      into gv_budat
      where mblnr in ( select mblnr
                         from aufm
                        where aufnr = gv_aufnr "(Prod. Order)
                            and  matnr = gv_matnr "(Issued Material)
                            and bwart = '261' ).

  • Copy Version - budget data copied from one version to another - how to view data in the copied version

    Dear All,
    Budget data is copied from one version to another using the Tools > Copy Version option. How can you view the data copied to the new version once you receive the message that the version was copied successfully?
    I think we can do that by selecting the appropriate version in the Version dimension while accessing the forms or in Smart View.
    Can you please let me know how to do this, or the different options available for this process?
    Thanks in advance for your valuable time.

    A form to check the data with the correct POV, a Smart View query, an Excel add-in retrieve, a financial report, a data export, a report script - take your pick.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Data copy between Essbase applications using a script

    How can I copy data from one Essbase application to another?
    Server Name: Server1
    Version: 9.3
    Essbase App: App1
    Data Bases: Db1, Db2,Db3
    Server Name: Server1
    Version: 9.3
    Essbase App: App2
    Data Bases: Db1, Db2,Db3
    Note: App1 & App2 have similar outlines.
    Requirement: copy Year 2012 data from App1 to App2.
    I have come to know that this is possible using an XREF calc script.
    Could someone please suggest the script?
    Thanks in advance!

    Partitioning would be the best option, as Glenn said, but check whether you have licenses for it, as partitioning is licensed separately from Essbase.
    The alternatives could be:
    * Data export / import: for exporting you could use the DATAEXPORT command within a calc script (see the Technical Reference for details), then import the data with a rule file.
    * XREF: this approach can give some headaches with block creation (see https://cn.forums.oracle.com/forums/thread.jspa?threadID=1010153). In general terms, try this one if the portion of data to copy is relatively small and well delimited.
    Nacho.-
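    As a rough illustration of the XREF route, a calc-script sketch run against App2.Db1 (the location alias App1Alias, the "FY12" and "Sales" members, and the FIX are all hypothetical; the location alias pointing at App1.Db1 has to be created beforehand, and SET CREATENONMISSINGBLK may help with the block-creation headaches mentioned above):
    SET CREATENONMISSINGBLK ON;
    FIX ("FY12", @RELATIVE("Entity", 0))
        "Sales" = @XREF(App1Alias);
    ENDFIX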

  • Has anyone found a solution for iPhone 5 data leak issues?

    Up until about a week ago I was using a 3GS, and the data leak issues seemed to be fixed with the newest iOS 6 update. However, I recently got an iPhone 5 and I've noticed it uses around 1 MB per hour no matter what I'm actually doing on the phone. I went to sleep last night after turning off cellular data AND Wi-Fi and it STILL used about 4 MB of data! What is up with this? I am a pretty conservative user of data when not on Wi-Fi, but I'm only 2 days into my billing cycle and already on pace to go over my 2 GB limit by the end of the month. Please help! I do not want to switch my plan and pay more. I am on AT&T, by the way.

    Have you tried these basic troubleshooting steps?
    Restart / Reset
    http://support.apple.com/en-us/HT201559
    Restore from backup
    Restore as new
    http://support.apple.com/en-us/HT201252
    If no joy, make an appointment with the Apple genius bar for an evaluation.

  • TileList data load issue

    I am having an issue where the data that drives a TileList works correctly when the TileList is not loaded on the first page of the application. When it is put on a second page in a ViewStack, the TileList displays correctly when you navigate to it. When the TileList is placed on the first page of the application, I get the correct number of items in the TileList, but the information the item renderer is supposed to display (a picture, caption and title) does not appear. The strange thing is that a Tree populates correctly in the same situation. Here is the sequence of events:
    // get_tree is the data for the tree and get_groups is the data for the tilelist
    creationComplete="get_tree.send();get_groups.send();"
    <mx:HTTPService showBusyCursor="true" id="get_groups"
        url="[some xml doc]" resultFormat="e4x"/>
    <mx:XMLListCollection id="myXMlist"
        source="{get_groups.lastResult.groups}"/>
    <mx:HTTPService showBusyCursor="true" id="get_tree"
        url="[some xml doc]" resultFormat="e4x" />
    <mx:XMLListCollection id="myTreeXMlist"
        source="{get_tree.lastResult.groups}"/>
    The data providers of the TileList and the Tree are then set accordingly. I tried moving the data calls from creationComplete to the initialize event, thinking they would fire earlier in the process and be done by the time creation completed, but that didn't help either. I'm at a loss as to why the Tree works fine no matter where I put it but the TileList does not. It's almost as if the Tree and the TileList will sit and wait for the data, but the item renderer in the TileList will not. That would explain why clicking on the TileList still produces the correct sequence of events while the visual part of the TileList is just not rendering. Anyone have any ideas?

    OK, so if the ASO value is wrong, then it's a data load issue and there is no point messing around with the BSO app. You are loading two transactions to the exact same intersection. Make sure your data load is set to aggregate values, not overwrite.
