'Big Join' causes tablespace quota exceeded (ORA-01536)

Hi guys, so basically I have 2 tables and 2 views I need to use to populate data into another table:
table dwcustomer(
customer_id varchar2,
customer_key number,
city varchar2,
state varchar2,
postal_code varchar2,
gender varchar2)
table dwproducts(
product_key number,
product_id number,
product_name,
weight,
...)
all_purchases_view(
customer_id varchar2,
product_id,
shipping_charge,
purchase_price,
...)
all_products_view(
product_id,
cost_price,
sell_price,
weight,
...)
Now I need to populate data into table DWPURCHASE(cost_price,sell_price,product_key,customer_key,ship_key,time_key,purchase_price,shipping_charge) using values from the tables above.
So this is what I have so far:
insert into dwpurchase(customer_key, shipping_charge, purchase_price, product_key, cost_price, sell_price, time_key, ship_key)
select dc.customer_key, pv.shipping_charge, pv.purchase_price, dp.product_key, pp.cost_price, pp.sell_price, 0, 0
from dwcustomer dc, all_purchases_view pv,
     dwproduct dp, all_products_view pp
where dc.customer_id = pv.customer_id
and pv.product_id = dp.product_id
and pp.product_name = dp.product_name
and pv.source_rowid not in (select row_id from error_event
                            where table_name = 'PURCHASESOURCE' and action = 3);
and yet I got this error:
ORA-01536: space quota exceeded for tablespace 'USERS'
01536. 00000 -  "space quota exceeded for tablespace '%s'"
*Cause:    The space quota for the segment owner in the tablespace has
           been exhausted and the operation attempted the creation of a
           new segment extent in the tablespace.
*Action:   Either drop unnecessary objects in the tablespace to reclaim
           space or have a privileged user increase the quota on this
           tablespace for the segment owner.
Any help on the query will be much appreciated, as I can't change the space quota, and yep the earlier the better
Thanks guys

The error message means exactly what it says.
It appears you are trying to put 50 gallons of sewage into a 5 gallon bucket. What kind of magic bullet do you expect from the forum?
"and yep the earlier the better"
This is a forum of volunteers.
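Before anything else, it is worth measuring how close the schema owner already is to the quota, and how many rows the join will actually produce. A minimal sketch, assuming the standard USER_TS_QUOTAS dictionary view is queryable (max_bytes = -1 means unlimited):
-- how much of my quota on USERS is already used?
select tablespace_name,
       bytes / 1024 / 1024     as used_mb,
       max_bytes / 1024 / 1024 as quota_mb
from   user_ts_quotas
where  tablespace_name = 'USERS';
-- if any join key is not unique, the insert multiplies rows;
-- counting the SELECT first shows how big the load will really be
select count(*)
from dwcustomer dc, all_purchases_view pv,
     dwproduct dp, all_products_view pp
where dc.customer_id = pv.customer_id
and pv.product_id = dp.product_id
and pp.product_name = dp.product_name;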

Similar Messages

  • Tablespace exceed

    Hi,
    Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE     11.2.0.2.0     Production
    TNS for 32-bit Windows: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    My Scenario is like Below.
    I am trying to insert bulk records into a table, which gives me the exception "ORA-01654: unable to extend index" because the tablespace is exceeded.
    Now I need to extend the tablespace and then insert again, which will work.
    We have the data which needs to be inserted in hand, and we know the tablespace size.
    So before inserting into the table, is there any way we can find out whether the data
    will fit into the tablespace without getting the "ORA-01654: unable to extend index" error?
    Hope the question is understandable. Any help would be appreciated.
    Regards,

    The first post mentions "unable to extend index" -- so you also need to check your indexes; there might be enough space in the table tablespace but not enough in the index tablespace.
    To estimate the size needed, here is a quick fix:
    load a sample of the rows that you need to load
    analyze the table -- there is an "avg_row_len" column in the DBA tables
    use that to estimate how much space you need to load all the data, make the tablespace that size, plus add 10% or so (for the pct_free)
    look at dba_indexes; there are leaf_blocks and distinct_keys columns you can use to estimate the number of blocks needed for your indexes
    This is a very simplified explanation and there is much more to it, but that should be a quick way to get what you need.
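    A rough sketch of that arithmetic in SQL, assuming the sample rows have been loaded into MY_TABLE (a placeholder name; :total_rows is a placeholder for the number of rows in the full load):
    exec dbms_stats.gather_table_stats(user, 'MY_TABLE');
    -- projected table size for the full load, plus ~10% for pct_free
    select avg_row_len, num_rows,
           round(avg_row_len * :total_rows * 1.1 / 1024 / 1024) as est_table_mb
    from   user_tables
    where  table_name = 'MY_TABLE';
    -- index side: leaf_blocks grows roughly linearly with the row count
    select index_name, leaf_blocks, distinct_keys
    from   user_indexes
    where  table_name = 'MY_TABLE';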

  • Join causing totals to fail

    I have multiple tables joined to a single master table and subtotaling works fine. However, when I join to a newly created table the totaling does not work - it just displays Sum: with no value. The join itself works, as I am able to get data from both tables, there is just no subtotaling. I also tried changing the default aggregation settings with no luck.
    This happens whether I use Discoverer Desktop 10.1 or Discoverer Plus
    Why would a join cause subtotaling to stop working ?
    Thanks
    Stephen

    The tables are each set up as a different folder.
    Here is the select statement....
    SELECT O102980.CAL_445_YYYY, SUM(O102624.CHECK_AMOUNT)
    FROM PDW.DW_PO_CHECKS O102624, SDSGDW.DW_GLOBAL_CALENDAR_445 O102980
    WHERE ( ( O102624.CHECK_DATE = O102980.CAL_DATE ) ) AND ( O102980.CAL_DATE IS NOT NULL )
    GROUP BY O102980.CAL_445_YYYY
    Thanks
    Stephen

  • More small datafiles or one big datafile for tablespace

    Hello,
    I would like to create a new tablespace (about 4 GB). Could someone tell me if it's better to create one big datafile or four datafiles of 1 GB each?
    Thank you.

    It depends. Most of the time, it's going to come down to personal preference.
    If you have multiple data files, will you be able to spread them over multiple physical devices? Or do you have a disk subsystem that virtualizes many physical devices and spreads the data files across physical drives (i.e. a SAN)?
    How big is the database going to get? You wouldn't want to have a 1 TB database with 1000 1 GB files, that would be a monster to manage. You probably wouldn't want a 250 GB database with a single data file either, because it would take forever to recover the data file from tape if there was a single block corruption.
    Is there a data files size that fits comfortably in whatever size mount points you have? If you get 10 GB chunks of SAN at a time, for example, you would probably want data files that were an integer factor of that (i.e. 1, 2, or 5 GB) so that you can add similarly sized data files without wasting space and so that you can move files to a new mountpoint without worrying about whether they'll all fit.
    Does your OS support files of an appropriate size? I know Windows had problems a while ago with files > 2 GB (at least when files extended beyond 2 GB).
    In the end though, this is one of those things that probably doesn't matter too much within reason.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
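    For illustration, here is what the two layouts look like in SQL; the paths and names are placeholders, and you would of course create only one of the two:
    -- one big datafile
    create tablespace app_data
      datafile '/u01/oradata/db/app_data01.dbf' size 4096m;
    -- four smaller datafiles, possibly on different mount points
    create tablespace app_data
      datafile '/u01/oradata/db/app_data01.dbf' size 1024m,
               '/u02/oradata/db/app_data02.dbf' size 1024m,
               '/u03/oradata/db/app_data03.dbf' size 1024m,
               '/u04/oradata/db/app_data04.dbf' size 1024m;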

  • Urgent help please.  Inner Join caused ora-00933 error

    I ran this one, it works fine:
    SELECT DISTINCT EXP.EXP_ID,
    EXP.DATU_EXP_WIRE_CENTER_CLLI,
    EXP.DATU_EXP_IP,
    EXP.DATU_EXP_CLLI,
    EXP.DATU_EXP_PORT,
    EXP.DATU_EXP_NAME,
    EXP.DATU_EXP_CITY,
    EXP.DATU_EXP_STATE,
    EXP.DATU_EXP_SW_VERSION,
    DECODE(LAST_ALARM.LAST_ALARM_DATE, NULL, TO_CHAR(SYSDATE,'YYYY/MM/DD HH24:MI:SS'),
         TO_CHAR(LAST_ALARM.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS STATUS_DATE,
    DECODE(LAST_ALARM.ALARM_NAME, NULL, 'Disconnected', LAST_ALARM.ALARM_NAME) AS DATU_STATUS,
    DECODE(LAST_ALARM.ALARM_CLASS, NULL, 'OTHER', LAST_ALARM.ALARM_CLASS) AS IS_ERROR_STATUS,
         DECODE(LAST_RESOURCE.LAST_ALARM_DATE, NULL, '', TO_CHAR(LAST_RESOURCE.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS RESOURCE_STATUS_DATE,
         DECODE(LAST_RESOURCE.RESOURCE_CODE_NAME, NULL, '', LAST_RESOURCE.RESOURCE_CODE_NAME) AS RESOURCE_STATUS,
         DECODE(LAST_RESOURCE.RESOURCE_CODE_CLASS, NULL, '', LAST_RESOURCE.RESOURCE_CODE_CLASS) AS IS_RESOURCE_ERROR_STATUS,
         DECODE(LAST_OPER.LAST_ALARM_DATE, NULL, '', TO_CHAR(LAST_OPER.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS OPER_STATUS_DATE,
         DECODE(LAST_OPER.OPER_CODE_NAME, NULL, '', LAST_OPER.OPER_CODE_NAME) AS OPER_STATUS,
         DECODE(LAST_OPER.OPER_CODE_CLASS, NULL, '', LAST_OPER.OPER_CODE_CLASS) AS IS_OPER_ERROR_STATUS,
    EXP.BEGIN_MAINT_WINDOW, RTU.RTU_NAME
    FROM TT_DATU_EXP_UNIT_INFO EXP
         left outer join
    ( SELECT distinct alarmed_datus.EXP_ID, c.ALARM_NAME, c.ALARM_TYPE, c.ALARM_CLASS, alarmed_datus.LAST_ALARM_DATE
    FROM ( SELECT EXP_ID, MAX(ALARM_TIME) AS LAST_ALARM_DATE FROM TT_DATU_EXP_ALARM_INFO GROUP BY EXP_ID ) alarmed_datus
    inner join TT_DATU_EXP_ALARM_INFO b on b.EXP_ID = alarmed_datus.EXP_ID AND b.ALARM_TIME = alarmed_datus.LAST_ALARM_DATE
    inner join TT_DATU_EXP_ALARM_TYPES c on b.ALARM_TYPE = c.ALARM_TYPE
    ) LAST_ALARM on EXP.EXP_ID = LAST_ALARM.EXP_ID
         left outer join
         ( SELECT distinct a.EXP_ID, c.RESOURCE_CODE_NAME, c.RESOURCE_CODE_TYPE, c.RESOURCE_CODE_CLASS, a.LAST_ALARM_DATE
         FROM ( SELECT EXP_ID, MAX(RESOURCE_CODE_TIME) AS LAST_ALARM_DATE
         FROM TT_DATU_EXP_RESOURCE_CODE_INFO GROUP BY EXP_ID ) a
    inner join TT_DATU_EXP_RESOURCE_CODE_INFO b on b.EXP_ID = a.EXP_ID AND b.RESOURCE_CODE_TIME = a.LAST_ALARM_DATE
    inner join TT_DATU_EXP_RESOURCECODE_TYPES c on b.RESOURCE_CODE_TYPE = c.RESOURCE_CODE_TYPE
         ) LAST_RESOURCE on EXP.EXP_ID = LAST_RESOURCE.EXP_ID
         left outer join
         ( SELECT distinct a.EXP_ID, c.OPER_CODE_NAME, c.OPER_CODE_TYPE, c.OPER_CODE_CLASS, a.LAST_ALARM_DATE
         FROM ( SELECT EXP_ID, MAX(OPER_CODE_TIME) AS LAST_ALARM_DATE
         FROM TT_DATU_EXP_OPER_CODE_INFO GROUP BY EXP_ID ) a
    inner join TT_DATU_EXP_OPER_CODE_INFO b on b.EXP_ID = a.EXP_ID AND b.OPER_CODE_TIME = a.LAST_ALARM_DATE
    inner join TT_DATU_EXP_OPER_CODE_TYPES c on b.OPER_CODE_TYPE = c.OPER_CODE_TYPE) LAST_OPER on EXP.EXP_ID = LAST_OPER.EXP_ID
    inner join TT_DATU_LRN_MAP LRNS on EXP.EXP_ID = LRNS.EXP_ID AND TRIM(LRNS.LRN) LIKE p_LRN
    inner join TT_RTU_TYPES RTU ON EXP.RTU_TYPE_ID = RTU.RTU_TYPE_ID
    WHERE NOT EXISTS (SELECT SATELLITE_EXP_ID FROM TT_HOST_SATELLITE WHERE EXP.EXP_ID = SATELLITE_EXP_ID)
    AND EXP.IS_PRIMARY_ADDRESS LIKE p_isPrimary;
         ELSE
         OPEN v_cursor FOR
    SELECT EXP.EXP_ID,
    EXP.DATU_EXP_WIRE_CENTER_CLLI,
    EXP.DATU_EXP_IP,
    EXP.DATU_EXP_CLLI,
    EXP.DATU_EXP_PORT,
    EXP.DATU_EXP_NAME,
    EXP.DATU_EXP_CITY,
    EXP.DATU_EXP_STATE,
    EXP.DATU_EXP_SW_VERSION,
    DECODE(LAST_ALARM.LAST_ALARM_DATE, NULL, TO_CHAR(SYSDATE,'YYYY/MM/DD HH24:MI:SS'), TO_CHAR(LAST_ALARM.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS STATUS_DATE,
    DECODE(LAST_ALARM.ALARM_NAME, NULL, 'Disconnected', LAST_ALARM.ALARM_NAME) AS DATU_STATUS,
    DECODE(LAST_ALARM.ALARM_CLASS, NULL, 'OTHER', LAST_ALARM.ALARM_CLASS) AS IS_ERROR_STATUS,
         DECODE(LAST_RESOURCE.LAST_ALARM_DATE, NULL, '', TO_CHAR(LAST_RESOURCE.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS RESOURCE_STATUS_DATE,
         DECODE(LAST_RESOURCE.RESOURCE_CODE_NAME, NULL, '', LAST_RESOURCE.RESOURCE_CODE_NAME) AS RESOURCE_STATUS,
         DECODE(LAST_RESOURCE.RESOURCE_CODE_CLASS, NULL, '', LAST_RESOURCE.RESOURCE_CODE_CLASS) AS IS_RESOURCE_ERROR_STATUS,
         DECODE(LAST_OPER.LAST_ALARM_DATE, NULL, '', TO_CHAR(LAST_OPER.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS OPER_STATUS_DATE,
         DECODE(LAST_OPER.OPER_CODE_NAME, NULL, '', LAST_OPER.OPER_CODE_NAME) AS OPER_STATUS,
         DECODE(LAST_OPER.OPER_CODE_CLASS, NULL, '', LAST_OPER.OPER_CODE_CLASS) AS IS_OPER_ERROR_STATUS,
    EXP.BEGIN_MAINT_WINDOW, RTU.RTU_NAME
    FROM TT_DATU_EXP_UNIT_INFO EXP
         left outer join (
    SELECT distinct alarmed_datus.EXP_ID, c.ALARM_NAME, c.ALARM_TYPE, c.ALARM_CLASS, alarmed_datus.LAST_ALARM_DATE
    FROM (SELECT EXP_ID, MAX(ALARM_TIME) AS LAST_ALARM_DATE FROM TT_DATU_EXP_ALARM_INFO GROUP BY EXP_ID ) alarmed_datus
         inner join TT_DATU_EXP_ALARM_INFO b on b.EXP_ID = alarmed_datus.EXP_ID AND b.ALARM_TIME = alarmed_datus.LAST_ALARM_DATE
         inner join TT_DATU_EXP_ALARM_TYPES c on b.ALARM_TYPE = c.ALARM_TYPE )
         LAST_ALARM on EXP.EXP_ID = LAST_ALARM.EXP_ID
         left outer join
              ( SELECT distinct a.EXP_ID, c.RESOURCE_CODE_NAME, c.RESOURCE_CODE_TYPE, c.RESOURCE_CODE_CLASS, a.LAST_ALARM_DATE
              FROM ( SELECT EXP_ID, MAX(RESOURCE_CODE_TIME) AS LAST_ALARM_DATE
              FROM TT_DATU_EXP_RESOURCE_CODE_INFO GROUP BY EXP_ID ) a
         inner join TT_DATU_EXP_RESOURCE_CODE_INFO b on b.EXP_ID = a.EXP_ID AND b.RESOURCE_CODE_TIME = a.LAST_ALARM_DATE
         inner join TT_DATU_EXP_RESOURCECODE_TYPES c on b.RESOURCE_CODE_TYPE = c.RESOURCE_CODE_TYPE) LAST_RESOURCE on EXP.EXP_ID = LAST_RESOURCE.EXP_ID
         left outer join
              ( SELECT distinct a.EXP_ID, c.OPER_CODE_NAME, c.OPER_CODE_TYPE, c.OPER_CODE_CLASS, a.LAST_ALARM_DATE
              FROM ( SELECT EXP_ID, MAX(OPER_CODE_TIME) AS LAST_ALARM_DATE
              FROM TT_DATU_EXP_OPER_CODE_INFO GROUP BY EXP_ID ) a
         inner join TT_DATU_EXP_OPER_CODE_INFO b on b.EXP_ID = a.EXP_ID AND b.OPER_CODE_TIME = a.LAST_ALARM_DATE
         inner join TT_DATU_EXP_OPER_CODE_TYPES c on b.OPER_CODE_TYPE = c.OPER_CODE_TYPE
              ) LAST_OPER on EXP.EXP_ID = LAST_OPER.EXP_ID ORDER BY EXP.DATU_EXP_CLLI
         inner join TT_RTU_TYPES RTU ON EXP.RTU_TYPE_ID = RTU.RTU_TYPE_ID
    WHERE NOT EXISTS (SELECT SATELLITE_EXP_ID FROM TT_HOST_SATELLITE WHERE EXP.EXP_ID = SATELLITE_EXP_ID) AND EXP.IS_PRIMARY_ADDRESS like
    p_isPrimary;
    However this one:
    SELECT EXP.EXP_ID,
    EXP.DATU_EXP_WIRE_CENTER_CLLI,
    EXP.DATU_EXP_IP,
    EXP.DATU_EXP_CLLI,
    EXP.DATU_EXP_PORT,
    EXP.DATU_EXP_NAME,
    EXP.DATU_EXP_CITY,
    EXP.DATU_EXP_STATE,
    EXP.DATU_EXP_SW_VERSION,
    DECODE(LAST_ALARM.LAST_ALARM_DATE, NULL, TO_CHAR(SYSDATE,'YYYY/MM/DD HH24:MI:SS'),
         TO_CHAR(LAST_ALARM.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS STATUS_DATE,
    DECODE(LAST_ALARM.ALARM_NAME, NULL, 'Disconnected', LAST_ALARM.ALARM_NAME) AS DATU_STATUS,
    DECODE(LAST_ALARM.ALARM_CLASS, NULL, 'OTHER', LAST_ALARM.ALARM_CLASS) AS IS_ERROR_STATUS,
         DECODE(LAST_RESOURCE.LAST_ALARM_DATE, NULL, '', TO_CHAR(LAST_RESOURCE.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS RESOURCE_STATUS_DATE,
         DECODE(LAST_RESOURCE.RESOURCE_CODE_NAME, NULL, '', LAST_RESOURCE.RESOURCE_CODE_NAME) AS RESOURCE_STATUS,
         DECODE(LAST_RESOURCE.RESOURCE_CODE_CLASS, NULL, '', LAST_RESOURCE.RESOURCE_CODE_CLASS) AS IS_RESOURCE_ERROR_STATUS,
         DECODE(LAST_OPER.LAST_ALARM_DATE, NULL, '', TO_CHAR(LAST_OPER.LAST_ALARM_DATE,'YYYY/MM/DD HH24:MI:SS')) AS OPER_STATUS_DATE,
         DECODE(LAST_OPER.OPER_CODE_NAME, NULL, '', LAST_OPER.OPER_CODE_NAME) AS OPER_STATUS,
         DECODE(LAST_OPER.OPER_CODE_CLASS, NULL, '', LAST_OPER.OPER_CODE_CLASS) AS IS_OPER_ERROR_STATUS,
    EXP.BEGIN_MAINT_WINDOW, RTU.RTU_NAME
    FROM TT_DATU_EXP_UNIT_INFO EXP
         left outer join
    SELECT distinct alarmed_datus.EXP_ID, c.ALARM_NAME, c.ALARM_TYPE, c.ALARM_CLASS, alarmed_datus.LAST_ALARM_DATE
    FROM ( SELECT EXP_ID, MAX(ALARM_TIME) AS LAST_ALARM_DATE FROM TT_DATU_EXP_ALARM_INFO GROUP BY EXP_ID) alarmed_datus
         inner join TT_DATU_EXP_ALARM_INFO b on b.EXP_ID = alarmed_datus.EXP_ID AND b.ALARM_TIME = alarmed_datus.LAST_ALARM_DATE
         inner join TT_DATU_EXP_ALARM_TYPES c on b.ALARM_TYPE = c.ALARM_TYPE ) LAST_ALARM on EXP.EXP_ID = LAST_ALARM.EXP_ID
         left outer join
              ( SELECT distinct a.EXP_ID, c.RESOURCE_CODE_NAME, c.RESOURCE_CODE_TYPE, c.RESOURCE_CODE_CLASS, a.LAST_ALARM_DATE
              FROM ( SELECT EXP_ID, MAX(RESOURCE_CODE_TIME) AS LAST_ALARM_DATE
              FROM TT_DATU_EXP_RESOURCE_CODE_INFO GROUP BY EXP_ID ) a
         inner join TT_DATU_EXP_RESOURCE_CODE_INFO b on b.EXP_ID = a.EXP_ID AND b.RESOURCE_CODE_TIME = a.LAST_ALARM_DATE
         inner join TT_DATU_EXP_RESOURCECODE_TYPES c on b.RESOURCE_CODE_TYPE = c.RESOURCE_CODE_TYPE) LAST_RESOURCE on EXP.EXP_ID = LAST_RESOURCE.EXP_ID
         left outer join
              ( SELECT distinct a.EXP_ID, c.OPER_CODE_NAME, c.OPER_CODE_TYPE, c.OPER_CODE_CLASS, a.LAST_ALARM_DATE
              FROM ( SELECT EXP_ID, MAX(OPER_CODE_TIME) AS LAST_ALARM_DATE
              FROM TT_DATU_EXP_OPER_CODE_INFO GROUP BY EXP_ID ) a
         inner join TT_DATU_EXP_OPER_CODE_INFO b on b.EXP_ID = a.EXP_ID AND b.OPER_CODE_TIME = a.LAST_ALARM_DATE
         inner join TT_DATU_EXP_OPER_CODE_TYPES c on b.OPER_CODE_TYPE = c.OPER_CODE_TYPE
              ) LAST_OPER on EXP.EXP_ID = LAST_OPER.EXP_ID ORDER BY EXP.DATU_EXP_CLLI
    inner join TT_RTU_TYPES RTU ON EXP.RTU_TYPE_ID = RTU.RTU_TYPE_ID
    WHERE EXP.IS_PRIMARY_ADDRESS like p_isPrimary;
    this one does not work and kept giving me this error:
    ORA-00933: SQL command not properly ended
    Can any guru help? I need to have this resolved by end of today.
    Thanks in advance.

    Hi,
    Never write, let alone post, unformatted code.
    Indent the code so that it's easy to see the scope of sub-queries, and the major clauses (SELECT, FROM, WHERE, ORDER BY, ...) in each.
    When posting any formatted text on this site, type these 6 characters:
    {code}
    (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.
    If you do that to the code you posted, you'll see that it ends like this:
         inner join TT_DATU_EXP_OPER_CODE_INFO b on b.EXP_ID = a.EXP_ID
                                                AND b.OPER_CODE_TIME = a.LAST_ALARM_DATE
         inner join TT_DATU_EXP_OPER_CODE_TYPES c on b.OPER_CODE_TYPE = c.OPER_CODE_TYPE
         ) LAST_OPER on EXP.EXP_ID = LAST_OPER.EXP_ID
    ORDER BY EXP.DATU_EXP_CLLI
    inner join TT_RTU_TYPES RTU ON EXP.RTU_TYPE_ID = RTU.RTU_TYPE_ID
    WHERE EXP.IS_PRIMARY_ADDRESS like p_isPrimary
    You can't put an ORDER BY clause in the middle of the FROM clause.
    The ORDER BY clause always goes after the WHERE clause, like this:
         inner join TT_DATU_EXP_OPER_CODE_INFO b on b.EXP_ID = a.EXP_ID
                                                AND b.OPER_CODE_TIME = a.LAST_ALARM_DATE
         inner join TT_DATU_EXP_OPER_CODE_TYPES c on b.OPER_CODE_TYPE = c.OPER_CODE_TYPE
         ) LAST_OPER on EXP.EXP_ID = LAST_OPER.EXP_ID
    inner join TT_RTU_TYPES RTU ON EXP.RTU_TYPE_ID = RTU.RTU_TYPE_ID
    WHERE EXP.IS_PRIMARY_ADDRESS like p_isPrimary
    ORDER BY EXP.DATU_EXP_CLLI
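    Boiled down to a skeleton (the table and column names here are placeholders, not from the posted query), the difference is only where ORDER BY sits relative to the joins:
    -- fails with ORA-00933: the ORDER BY interrupts the FROM clause
    select t.col
    from t
         left outer join u on u.id = t.id order by t.col
         inner join v on v.id = t.id
    where t.flag = 'Y';
    -- parses: all joins first, then WHERE, then ORDER BY last
    select t.col
    from t
         left outer join u on u.id = t.id
         inner join v on v.id = t.id
    where t.flag = 'Y'
    order by t.col;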

  • CS5: "join" causes points to change position

    What setting have I changed: Just today, when I try to move a point, "unite" shapes, or join 2 points, the points jump out of position. I've turned off "snap to grid" and "align new objects to pixel grid". I've even copied and pasted the artwork into a new "default" file. I've also restarted AICS5. I'm sure I'm overlooking something obvious. Any ideas?
    Thanks,
    Ray

    They don't remain in the same XY position, they move slightly.
    I copied a piece of the artwork into a new, default "Print" file and it did the same thing, but if I create new artwork in the same file, the points remain in the correct position. I'm confused. I even copied the artwork into a AICC file (using the clipboard) and have the same problem. It seems to be a preference linked to the artwork itself.

  • Big ANE causes java.lang.OutOfMemoryError when packaging Air application

    Hi,
    I'm working on an Air mobile game that uses ANE on iOS and Android. I'm in the process of creating a new ANE and face a problem on the Android side.
    My ANE requires an external framework (Burstly, http://burstly.com). If I just link the Android project to Burstly's .jar file, I get errors in "adb logcat", like:
    I/dalvikvm(16074): Could not find method com.burstly.lib.BurstlySdk.init, referenced from method com.freshplanet.burstly.functions.InitBurstlyFunction.call
    In order to include Burstly's files in my own .jar, I unzip Burstly's .jar file and repackage its contents with my compiled code into a single .jar (following advice on http://stackoverflow.com/questions/7732742/air-3-native-extensions-for-android-can-i-how-to-include-3rd-party-libraries).
    Problem: Burstly's SDK includes thousands of files. It doesn't create any trouble when packaging the ANE, but when I try to package the Air application, I get the following error:
    dx tool failed:
    UNEXPECTED TOP-LEVEL ERROR:
    java.lang.OutOfMemoryError: Java heap space
              at com.android.dx.util.IntList.<init>(IntList.java:87)
              at com.android.dx.rop.code.RopMethod.calcPredecessors(RopMethod.java:174)
              at com.android.dx.rop.code.RopMethod.labelToPredecessors(RopMethod.java:95)
              at com.android.dx.ssa.back.IdenticalBlockCombiner.process(IdenticalBlockCombiner.java:74)
              at com.android.dx.ssa.back.SsaToRop.convert(SsaToRop.java:132)
              at com.android.dx.ssa.back.SsaToRop.convertToRopMethod(SsaToRop.java:76)
              at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:103)
              at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:74)
              at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:269)
              at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:131)
              at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:85)
              at com.android.dx.command.dexer.Main.processClass(Main.java:299)
              at com.android.dx.command.dexer.Main.processFileBytes(Main.java:278)
              at com.android.dx.command.dexer.Main.access$100(Main.java:56)
              at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:229)
              at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:244)
              at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:130)
              at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108)
              at com.android.dx.command.dexer.Main.processOne(Main.java:247)
              at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183)
              at com.android.dx.command.dexer.Main.run(Main.java:139)
              at com.android.dx.command.dexer.Main.main(Main.java:120)
              at com.android.dx.command.Main.main(Main.java:89)
    I read that the solution to eliminate this error is to give Java the parameters "-Xms...M -Xmx...M", with "..." being a high-enough number. Note that I'm working on a machine with 8GB of RAM. I tried to package the app in command line to be able to pass these parameters:
    /usr/bin/java -Xms512M -Xmx4096M -jar "/Applications/Adobe Flash Builder 4.6/sdks/4.6.0air31/lib/adt.jar" -package -target apk -storetype pkcs12 -keystore [...].p12 Main.apk Main-app.xml Main.swf -extdir "/Users/alex/Documents/Adobe Flash Builder 4.6/.metadata/.plugins/com.adobe.flexbuilder.project.ui/ANEFiles/front-end-mobile/com.adobe.flexide.multiplatform.ios.platform"
    But when I run a "ps -ef | grep java", I can see that adt runs another Java program (dx) without transmitting my -Xms -Xmx parameters:
    /usr/bin/java -jar /Applications/Adobe Flash Builder 4.6/sdks/4.6.0air31/lib/android/bin/dx.jar --dex --output=/private/var/folders/t9/3kw74cx14nv2xg9tgmx9m1jc0000gp/T/b5757d93-1e93-439c-8f6d -c93e4933f6f1/outputDEX.dex [... bunch of jars]
    Any idea to solve this issue?
    Thanks
    Alex

    I solved my issue by setting the _JAVA_OPTIONS environment variable. (Note: there are two underscores)
    I added the following line to my .bash_profile:
    export _JAVA_OPTIONS="-Xms1024m -Xmx4096m -XX:MaxPermSize=512m"
    Now every time a Java program is launched from the command line, I see the following message:
    Picked up _JAVA_OPTIONS: -Xms1024m -Xmx4096m -XX:MaxPermSize=512m
    And my application packaging runs just fine now.
    I still have an issue though: this trick solved the problem for packaging the app from the command line, but the _JAVA_OPTIONS are not picked up when packaging from Flash Builder, so it still crashes there.
    Note that my Adobe Flash Builder 4.6.ini contains the following options:
    -Xms512m
    -Xmx1676m
    -XX:MaxPermSize=512m
    -XX:PermSize=64m
    1676m is the highest number I can put before Flash Builder refuses to launch. I'm not sure if these parameters are actually passed to the VM that runs the dx.jar program, or if it's just for the ActionScript compiler. But anyway my app packaging still crashes in Flash Builder.
    If someone knows a way to force Flash Builder to pickup the _JAVA_OPTIONS set in the command line, let me know :-)
    Thanks
    Alex

  • XMLTable join causes parallel query not to work

    We have a large table; one column stores XML data as binary XMLType storage, and an XMLTABLE query is used to extract the data.
    If we just need to extract data into a column, and the data has no relation to other data columns, the XMLTABLE query is super fast.
    Once the data has a parent -> children relationship with other columns, the query becomes extremely slow. From the query plan, we can observe that the parallel execution is gone.
    I can reproduce the problem with the following scripts:
    1. Test scripts to setup
    =============================
    -- Test table
    drop table test_xml;
    CREATE table test_xml
    ( period date,
      xml_content xmltype )
    XMLTYPE COLUMN xml_content STORE AS SECUREFILE BINARY XML (
      STORAGE ( INITIAL 64K )
      enable storage in row
      nocache
      nologging
      chunk 8K
    )
    parallel
    compress;
    -- Populate test_xml table with some records for testing
    insert into test_xml (period, xml_content)
    select sysdate, xmltype('<?xml version = "1.0" encoding = "UTF-8"?>
    <searchresult>
    <hotels>
    <hotel>
    <hotel.id>10</hotel.id>
    <roomtypes>
    <roomtype>
    <roomtype.ID>20</roomtype.ID>
    <rooms>
    <room>
    <id>30</id>
    <meals>
    <meal>
    <id>Breakfast</id>
    <price>300</price>
    </meal>
    <meal>
    <id>Dinner</id>
    <price>600</price>
    </meal>
    </meals>
    </room>
    </rooms>
    </roomtype>
    </roomtypes>
    </hotel>
    </hotels>
    </searchresult>') from dual;
    commit;
    begin
    for i in 1 .. 10
    loop
    insert into test_xml select * from test_xml;
    end loop;
    commit;
    end;
    select count(*) from test_xml;
    -- 1024
    2. Fast query. It only extracts room_id info; the plan shows parallel execution. The performance is very good.
    =================================================================
    explain plan for
    select *
    from test_xml,
    XMLTABLE ('/searchresult/hotels/hotel/roomtypes/roomtype/rooms/room'
    passing xml_content
    COLUMNS
    room_id varchar2(4000) PATH './id/text()'
    ) a;
    select * from table(dbms_xplan.display());
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 8364K| 15G| 548 (1)| 00:00:07 | | | |
    | 1 | PX COORDINATOR | | | | | | | | |
    | 2 | PX SEND QC (RANDOM) | :TQ10000 | 8364K| 15G| 548 (1)| 00:00:07 | Q1,00 | P->S | QC (RAND) |
    | 3 | NESTED LOOPS | | 8364K| 15G| 548 (1)| 00:00:07 | Q1,00 | PCWP | |
    | 4 | PX BLOCK ITERATOR | | | | | | Q1,00 | PCWC | |
    | 5 | TABLE ACCESS FULL| TEST_XML | 1024 | 2011K| 2 (0)| 00:00:01 | Q1,00 | PCWP | |
    | 6 | XPATH EVALUATION | | | | | | Q1,00 | PCWP | |
    3. The slow query. Extracting room_id plus the meal ids, there is no parallel execution. Performance is very bad.
    ==============================================================
    -- One room can have multiple meal ids
    explain plan for
    select *
    from test_xml,
    XMLTABLE ('/searchresult/hotels/hotel/roomtypes/roomtype/rooms/room'
    passing xml_content
    COLUMNS
    room_id varchar2(4000) PATH './id/text()'
    , meals_node xmltype path './meals'
    ) a,
    XMLTABLE ('./meals/meal'
    passing meals_node
    COLUMNS
    meals_ids varchar2(4000) PATH './id/text()'
    ) b;
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 68G| 125T| 33M (1)|112:33:52 |
    | 1 | NESTED LOOPS | | 68G| 125T| 33M (1)|112:33:52 |
    | 2 | NESTED LOOPS | | 8364K| 15G| 676 (1)| 00:00:09 |
    | 3 | TABLE ACCESS FULL| TEST_XML | 1024 | 2011K| 2 (0)| 00:00:01 |
    | 4 | XPATH EVALUATION | | | | | |
    | 5 | XPATH EVALUATION | | | | | |
    Is binary XML storage designed to handle only data without parent-child relationships?
    I would highly appreciate it if someone could help.

    This problem has been confirmed as an Oracle bug; the bug is not fixed yet.
    Bug 16752984 : PARALLEL EXECUTION NOT WORKING WITH XMLTYPE COLUMN
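    Until the bug is fixed, one rewrite that is sometimes worth testing is to flatten the two XMLTABLE calls into one, driving from the meal level and reaching back up to the room id with the parent axis. This is only a sketch against the test data above, and whether the optimizer keeps the parallel XPATH EVALUATION plan for it has to be verified with explain plan:
    explain plan for
    select a.room_id, a.meal_id
    from test_xml,
    XMLTABLE ('/searchresult/hotels/hotel/roomtypes/roomtype/rooms/room/meals/meal'
    passing xml_content
    COLUMNS
    room_id varchar2(4000) PATH '../../id/text()'
    , meal_id varchar2(4000) PATH './id/text()'
    ) a;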

  • Re org tablespace

    Hi Guys,
    My tablespace is fragmented, so I'm trying to de-frag it by issuing this command:
    Alter tablespace TBS_NAME coalesce;
    But after I've run it and then refreshed the Tablespace Map, I don't see the changes. Does one have to run it more than once to see the changes?
    Thanx.

    I doubt if it is possible. I don't see any attachment option.
    What is your issue? Why do you want to reorganise the tablespace?
    How big is the tablespace? If it is locally managed, you don't need to.
    -We are experiencing performance issues on the application side. The team that did an investigation suggested that the problem was caused by fragmentation of the tablespace. (I did an analysis on the tablespace but no space management issues were detected.)
    -The tablespace is locally managed and is about 10G.
    -I've got free blocks scattered all over; even when I coalesce free extents I don't see them joining to form one big chunk.
    Thanx.
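    A quick way to see whether the coalesce is merging anything is to compare the free-extent counts before and after; a sketch against the standard dictionary view (substitute your own tablespace name):
    select tablespace_name,
           count(*)          as free_extents,
           max(bytes) / 1024 as largest_free_kb,
           sum(bytes) / 1024 as total_free_kb
    from   dba_free_space
    where  tablespace_name = 'TBS_NAME'
    group by tablespace_name;
    Note that in a locally managed tablespace free space is tracked in bitmaps and adjacent free extents are effectively treated as merged already, so coalesce has nothing to do there - which would explain seeing no change in the map.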

  • Refresh Analytic Workspace increase tablespace size ?

    Hello, I'm using CWM2 OLAP objects with 9.2.0.6 database version. I have analytic workspaces, too.
    I execute a refresh of my analytic workspace, using the script generated by Analytic Workspace Manager 9.2.0.4.1. I have noticed that, strangely, whenever I execute the script, it increases the used space of the tablespace containing the analytic workspace.
    Is this usual? The script doesn't include new objects, and the effect is the same. What is failing? I think that this should not happen.
    Any idea? Thanks a lot for your help

    vbarres wrote:
    "if my datawarehouse grows dinamically (I'm adding new objects every day), what size has to have my aw tablespace? It's difficult to reserve space on disk for this..."
    I'm confused here. Are you really adding new objects every day, or just adding more data to the existing ones? In general, if you are adding more data, the growth should be reasonably predictable, assuming a reasonable AW design. The usual growth confusion has to do with reloading the same data over again, where the growth is unexpected.
    rhaug wrote:
    "I had a similar problem in my AW. The temp and the aw tablespace got ridiculously big. The temp tablespace exceeded 20gb before the disk got full and everything crashed. In my case I had made several cubes to perform different tests. When I deleted all but one cube things behaved more normal and the aggregation time was stable."
    Again, I'm not sure what is going on here from the descriptions. As far as I know, there should not be a problem with multiple cubes in the same AW. You don't say what version you are on, but is it possibly version 10? I could see some interaction in a parallel build, where the various cubes are built simultaneously, and thus the pages on the free list are not necessarily available for reuse when you might want them. More details would be interesting here. You might also try a serial build (I'm not sure where the switch is in AWM, but I'm sure it's there somewhere).

  • Combinig two tables without a JOINER

    Hi,
    I've got a little problem. I want to combine two tables, but I can't use a joiner, because that would give me wrongly combined values.
    I have a table working_hours, which has the actual working hours of employees, and a table times_absent, which has the absent times of the employees.
    Both tables have the date and the actual hours of work or absence, besides other things.
    I simply want to combine these two tables into one big table.
    Example:
    Table working_hours :
    Name --- Date --- Hours -- ....
    Mr.A --- 2.5.2011 --- 8 --- ...
    Mr.B --- 2.5.2011 --- 6 --- ...
    Mr.C --- 2.5.2011 --- 7 --- ...
    Table times_absent:
    Name --- Date --- Days --- ....
    Mr.A --- 3.5.2011 --- 1 --- ...
    Mr.B --- 3.5.2011 --- 2 --- ...
    Mr.B --- 4.5.2011 --- 2 --- ...
    New Table:
    Name --- Date --- Working_Hours --- Absent_Days ---
    Mr.A --- 2.5.2011 --- 8 ---- null - or some sort of dummy value XXX
    Mr.A --- 3.5.2011 --- null --- 1 ---
    Mr.B --- 2.5.2011 --- 6 --- null ---
    Mr.B --- 3.5.2011 --- null --- 2 ---
    Mr.B --- 4.5.2011 --- null --- 2 ---
    and so on.
    Is there a possibility in the OWB to perform such task?
    thx

    As MccM says, you need to make the columns in your IN groups the same, i.e. the same structure as your new table.
    Connect WORKING HOURS to IN GROUP 1 and ABSENT to IN GROUP 2 for the columns that exist.
    Create a numeric CONSTANT with a default value of NULL and connect it to ABSENT_DAYS in IN GROUP 1 and WORKING_HOURS in IN GROUP 2.
    You don't need to change the structure of your tables.
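    In plain SQL, what the set operator produces amounts to a UNION ALL with NULL padding; a sketch, with the column names guessed from the example above (work_date and absent_date stand in for whatever the date columns are really called):
    select name, work_date as activity_date, hours as working_hours, to_number(null) as absent_days
    from   working_hours
    union all
    select name, absent_date, to_number(null), days
    from   times_absent
    order by name, activity_date;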

  • Change Tablespace of a table with LONG column

    I have a 9i database that I have just gotten control of. At this point there is just one big dictionary managed tablespace for everything created by users. I am trying to move to multiple locally managed tablespaces with fixed extent sizes but I have run into a problem.
    I have one table with one LONG datatype column. Apparently there is a huge amount of work involved to change the code if I make it a BLOB so that is out.
    At this point I would like to change the tablespace of this table but I can't move it the normal way because of the LONG column. I have found mention of being able to do this with "COPY" but I can't find any documentation on the "COPY" command in the 9i Docs.
    Any help would be appreciated,
    Chris S.

    Chris-
    Can't you create your new table ahead of time in your new tablespace?
    You could then use a statement like:
    COPY FROM old/your_password@olddb TO new/your_password@newdb -
    REPLACE NEWTABLE -
    USING SELECT * FROM OLDTABLE;
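    One caution if you go this route: the SQL*Plus COPY command truncates LONG values to the current LONG setting, so it should be raised first. A sketch of the surrounding session settings (the values are illustrative, not tuned):
    SET LONG 2000000000
    SET COPYCOMMIT 100
    SET ARRAYSIZE 50
    COPY FROM old/your_password@olddb TO new/your_password@newdb -
    REPLACE NEWTABLE -
    USING SELECT * FROM OLDTABLE;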

  • Copy Folder with Joins. Export/Import Folder with joins. In EUL.

    Ok, I've got a custom folder which has been made up by dragging items from 2 or 3 other folders into it.
    It then has some joins of its own, quite a few.
    When trying to create a workbook from it, it takes 9 mins to run a query.
    I need to work out what is slowing it down. If I create the same workbook against the folder which has the majority of the items in the custom folder, it runs instantly.
    So I suspect it is one of the joins causing it.
    My plan was to duplicate the folder, then remove joins until I find out which one is causing it.
    However, if I cut n paste the folder, I get a copy without the joins.
    If I export the folder and import it I get a copy without the joins.
    Question then - how can I get a copy of a folder WITH the joins ?
    I'm slightly concerned that when I export my EUL from the dev database and import it into the live database that I'm not going to get any joins since the export and import into the dev database is not retaining the joins.
    Anyone ?

    Hi,
    The preferences for Disco Plus are set in the pref.txt file on the apps server and for Disco Desktop in the Windows Registry. I think the defaults are set on so unless you have changed them this is unlikely to help.
    I think I read somewhere that the 11g optimiser will remove unused outer joins or where there is a foreign key constraint. I may have made that last bit up as I cannot find a reference to it, but it may be worth exploring.
    To speed things up you could look at why this join is slowing things down. It could be that you need an index on the join column.
    The join actually is used, in that it has to check in the other table that a record exists. This is why Discoverer cannot remove the join from the complex folder query. If it did and there were no matching records in the other table then you would get a different result.
    Rod West

  • Bad Performance of Merge Join

    We are on ASE 15.0.3/EBF 21284 ESD#4.3 working on a application with over 3000 stored procedures.
    Our server optimization goal is allrows_mix.
    The Merge-Join is giving us problems. When a query uses Merge-Join, it usually takes an order of magnitude longer to run than if we force it
    to use one of the other types of joins - nested-loop, n-ary-nested-loop, or hash-join.
    The query plan shows sorting on worktables leading into the merge-join.
    I know I can disable it with "set merge_join off", or "set plan optgoal allrows_oltp", but I'd rather not if I can fix the problem instead.
    Question: Are there configuration options that would help merge-join?
    I've done variations of this:
    sp_configure "number of sort buffers", 32000
    I've also done variations of this in the proc:
    set parallel_degree 5
    set scan_parallel_degree 4
    When I run the following command, I see sort buffer starvation:
    1> sp_monitorconfig "sort buffers"
    2> go
    Usage information at date and time: Apr 24 2014  2:31PM.
    Name                          Num_free  Num_active  Pct_act  Max_Used  Reuse_cnt
    number of sort buffers               0       82045   100.00     82045          0
    (1 row affected)
    (return status = 0)
    Maybe there are other configuration option to help merge-joins? Any ideas?
    Thanks.

    Well, I'm gonna have to emphatically disagree with your comment ...
    "regressing back to allrows_oltp setting to solve your performance problems should not be encouraged"
    For *EVERY* client I've worked with on migrating from ASE 12.5.x to ASE 15.x ... they all had the same objective ... get through the migration as quickly as possible and do not degrade the performance of our database queries.  Unfortunately for every client I've worked with ... ASE 15.x, and the default of allrows_mix, did just the opposite, ie, migrations took much longer than expected/planned due primarily to huge performance degradation across their SQL inventory.
    For most of my clients merge joins were rarely, if ever, used in ASE 12.5.x.  And since hash joins never existed, that leaves us with using nested loop joins in ASE 15.x in an attempt to stay as close to ASE 12.5.x in terms of performance.
    NOTE: No, I don't consider compatibility mode as a solution as this requires you go through 2 migrations ... once to compatibility mode ... and eventually once to get off of compatibility mode.
    Now, can merge joins improve the performance of *some* queries?  Absolutely, but in practice ... especially with the first 4-5 years of ASE 15.x releases ... merge joins caused more headaches and performance degradation than they were worth.  I've seen too many clients spend huge amounts of time trying to re-write code to work with merge joins, often failing and having to 'regress back' to nested loop joins in the end.
    Unfortunately a) Sybase delivered ASE 15.x with allows_mix as the default and b) most companies didn't have enough migration experience to understand the pitfalls of trying to run all of their queries under the default of allrows_mix.  This meant that many companies were left having to 'regress back' to alternative solutions (eg, allrows_oltp, compat mode, don't migrate, move to another RDBMS) to address the performance degradation introduced with ASE 15.x and the default setting of allrows_mix.

  • Regarding select join query

    hey guys,
    I have the below inputs on the screen:
          ZWADAT_IST -Actual goods movement date
          ZKUNNR- shipto party
          ZVGBEL-
          ZVBELN-
          ZPOSNR-
          ZBUNKATU-
    Depending on the input, I have to retrieve the necessary fields from various related tables and display them on the screen.
    ZSDTB_LOT is the add-on table where we can access a few fields, as below. Now I want to write a query using a join to fetch the fields from different tables like LIKP, LIPS, ZSDTB_LOT and ADRC according to the inputs,
    and display them in a table control.
    I am not good at writing SELECT JOIN statements to retrieve fields in an efficient way. Could somebody help me
    by giving equivalent code for this...
    Select
          LIKP-WADAT_IST,
          LIKP-VSTEL(shipping point)
          LIKP-KUNNR
          LIKP-LFART(delievery type)
          LIPS-VKBUR
          LIPS-MATNR
          LIPS-VGBEL
          ZSDTB_LOT-VBELN
          ZSDTB_LOT-BUNKATSU
          ZSDTB_LOT-POSNR
          ZSDTB_LOT-LOTB
          ZSDTB_LOT-LGMNG
          ADRC-NAME1
    FROM ZSDTB_LOT,LIKP,LIPS,ADRC
    WHERE WADAT_IST IN ZWADAT_IST AND
          KUNNR IN ZKUNNR AND
          VGBEL IN ZVGBEL AND
          VBELN IN ZVBELN AND
          POSNR IN ZPOSNR AND
          BUNKATU IN ZBUNKATU.
    ambichan.

    I will try to avoid a big join on so many tables. Instead I will do something like this.
    DATA: BEGIN OF deliveries OCCURS 0,
            vbeln     LIKE likp-vbeln,
            lfart     LIKE likp-lfart,
            wadat_ist LIKE likp-wadat_ist,
            vstel     LIKE likp-vstel,
            kunnr     LIKE likp-kunnr,
            posnr     LIKE lips-posnr,
            matnr     LIKE lips-matnr,
            vkbur     LIKE lips-vkbur,
            vgbel     LIKE lips-vgbel.
    DATA: END OF deliveries.
    DATA: BEGIN OF ztab_entries OCCURS 0,
            vbeln    LIKE zsdtb_lot-vbeln,
            posnr    LIKE zsdtb_lot-posnr,
            lotb     LIKE zsdtb_lot-lotb,
            lgmng    LIKE zsdtb_lot-lgmng,
            bunkatsu LIKE zsdtb_lot-bunkatsu.
    DATA: END OF ztab_entries.
    SELECT likp~vbeln likp~lfart likp~wadat_ist
           likp~vstel likp~kunnr lips~posnr
           lips~matnr lips~vkbur lips~vgbel
      FROM likp as likp INNER JOIN lips as lips
        ON likp~vbeln = lips~vbeln
      INTO TABLE deliveries
    WHERE lips~vbeln IN zvbeln
       AND lips~posnr IN zposnr.
    IF NOT deliveries[] IS INITIAL.
      DELETE deliveries WHERE NOT
            ( wadat_ist IN zwadat_ist AND
              kunnr     IN zkunnr     AND
              vgbel     IN zvgbel ).
    ENDIF.
    SELECT vbeln posnr lotb
           lgmng bunkatsu
      FROM zsdtb_lot FOR ALL ENTRIES IN deliveries
      INTO TABLE ztab_entries
    WHERE vbeln = deliveries-vbeln
       AND posnr = deliveries-posnr.
    IF NOT ztab_entries[] IS INITIAL.
      DELETE ztab_entries WHERE NOT bunkatsu IN zbunkatu.
    ENDIF.
    *-- Once we have these two internal tables then we can
    *   prepare the final table by looping through them
    SORT deliveries BY vbeln posnr.
    SORT ztab_entries BY vbeln posnr.
    LOOP AT deliveries.
      CLEAR ztab_entries.
      READ TABLE ztab_entries WITH KEY vbeln = deliveries-vbeln
                                       posnr = deliveries-posnr
                            BINARY SEARCH.
      SELECT SINGLE name1 INTO final_tab-name1
                          FROM KNA1
                         WHERE kunnr = deliveries-kunnr.
      MOVE-CORRESPONDING: deliveries   TO final_tab,
                          ztab_entries TO final_tab.
      APPEND final_tab.
      CLEAR final_tab.
    ENDLOOP.
    This way you will be selecting records from the database with the index of the primary key and then you can manipulate the selected entries as you wish.
    Hope this helps,
    Srinivas

    I have just got a new iPhone 5 (my first iPhone!), and seem to have an issue with messaging. I was doing some checks such as making calls, sending e-mails etc... And found that I can send iMessages to other apple products, however I cannot send a mes