Best way of partitioning a huge table in Oracle 11gR2

Hi,
OS: Linux
DB: Oracle 11gR2
The table is about 2 TB, in a single tablespace with multiple DBF files in ASM.
Partitioning the table with DBMS_REDEFINITION (range partitioning) has been running for the past 2 days. It is still running, but not that fast.
Note: exchange partition was too slow, hence not used.
What is the best way to partition a table this large? Please suggest. Thanks.
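For reference, a typical online redefinition flow looks roughly like the sketch below (the schema, table, and interim-table names are hypothetical; this is not necessarily the poster's exact script):

-- 1. Check the table can be redefined (by primary key here).
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TAB', DBMS_REDEFINITION.CONS_USE_PK);
END;
/
-- 2. Create an interim table with the same columns but range partitioned, e.g.
--    CREATE TABLE big_tab_interim (...) PARTITION BY RANGE (create_dte) (...);
-- 3. Start the redefinition, copy dependents, sync, and finish.
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM',
                                          num_errors => l_errors);
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM');
END;
/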

> What is the best way to partition a table this large?
A few questions:
1. Is that an OLTP or OLAP table?
2. Is all of the data still needed online, or is some of it archivable?
3. What type of partitioning are you doing? RANGE? LIST? COMPOSITE?
4. Why do you say 'exchange partition is too slow' - did you actually run a test?
For example, if the data will be partitioned by create date, then one strategy is to initially put all existing data into the first partition of a range-partitioned table. Then you can start using new partitions for new incoming data and take your time redistributing the existing data; see the sketch below.
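A minimal sketch of that strategy, assuming a hypothetical table BIG_TAB_PART partitioned by CREATE_DTE (all names and dates here are illustrative, not from the thread):

-- All existing rows land in P_HIST; new months get their own partitions.
CREATE TABLE big_tab_part (
  id         NUMBER,
  create_dte DATE,
  payload    VARCHAR2(100)
)
PARTITION BY RANGE (create_dte) (
  PARTITION p_hist    VALUES LESS THAN (DATE '2013-01-01'),  -- all current data
  PARTITION p_2013_01 VALUES LESS THAN (DATE '2013-02-01'),
  PARTITION p_2013_02 VALUES LESS THAN (DATE '2013-03-01')
);

-- Later, carve the historical data out of P_HIST at your own pace:
ALTER TABLE big_tab_part SPLIT PARTITION p_hist
  AT (DATE '2012-01-01')
  INTO (PARTITION p_pre_2012, PARTITION p_2012);

On 11g you could also consider INTERVAL partitioning, so that new monthly partitions are created automatically as data arrives.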

Similar Messages

  • Best way to import data to multiple tables in oracle d.b from sql server

Hi all, I am a newbie to Oracle.
What is the best way to import data into multiple tables in an Oracle database from SQL Server?
1) Linked server?
2) SSIS?
If possible, share the query to do this task using a linked server.
Regards,
KoteRavindra.

    check:
    http://www.mssqltips.com/sqlservertip/2011/export-sql-server-data-to-oracle-using-ssis/
Why so many unresolved questions? Remember to close your threads by marking them as answered.

  • The best way to find the relationship in table especially for PS

    Hi All,
What is the fastest way to find the relationship between a PS table and another table?
For example, tables PROJ and PRPS with another table in the FICO module... let's say the table that stores the CJR2 transaction... the WBS element linked to a cost element or activity type... and the material on it...
    please help.
    Cheers,
    Nies

Go to SE38, select any report, choose the 'Attributes' radio button, and double-click on 'Logical database'. Enter PSJ as the logical database for Projects and select 'Display structure'. You will get all the dependent table names in the relevant sequence.

  • What is the best way to partition a MacBook Air 13" 2012 solid state drive?

    What is the best way to partition a MacBook Air 13" 2012 solid state drive?

    Tech,
    You don't provide enough information on which anyone could reasonably formulate an answer.
    As mentioned, you don't indicate the circumstances that would warrant considering multiple partitions. Moreover, you also don't indicate the size of the SSD in question.
    Like Fred said, ordinarily you leave it as one partition. Some people like to keep the data and the OS/apps separate, but only for specific reasons.

  • What is the best way to verify the default heap size in Java

    Hi All,
    What is the best way to verify the default heap size in Java? Does it vary from JVM to JVM? I was reading this article http://javarevisited.blogspot.sg/2011/05/java-heap-space-memory-size-jvm.html , and it says the default size is 128 MB, but when I run the following code:
    public static void main(String[] args) {
        int MB = 1024 * 1024;
        System.out.println(Runtime.getRuntime().totalMemory() / MB);
    }
    it prints "870", i.e. 870 MB.
    I am a bit confused: what is the best way to verify the default heap size in any JVM?
    Edited by: 938864 on Jun 5, 2012 11:16 PM

    938864 wrote:
    Hi Kayaman,
    Sorry, but I don't agree with you on the verification part. Why can't I verify it? To me, "default" means the value used when I don't specify -Xms and -Xmx. By the way, I was testing that program on a 32-bit JRE 1.6 on Windows. I am also curious about the significant difference between 128 MB and 870 MB; do you see anything obviously wrong?

    That spec is outdated. Since Java 6 update 18 (Sun/Oracle implementation) the default maximum heap space is calculated based on total memory availability, but it is never more than 1 GB on 32-bit JVMs / client VMs. On a 64-bit server VM the default can go as high as 32 GB.
    The best way to verify ANYTHING is to consult multiple sources of information, especially documentation produced by the vendor itself, not some page you find on the big bad internet. Even Wikipedia is a whole lot better than any random internet site, IMO. That's common sense; I can't believe you had to ask in a forum.

  • Hi all! What is the best way to create the correct space for baseball jersey names and numbers, while making sure they are the right size for large-format printing?

    What is the best way to create the correct space for baseball jersey names and numbers, while making sure they are the right size for large-format printing?

    Buying more hard drive space is a very valid option here. Editing takes up lots of room; never discount the idea of adding more when you need it.
    Another possibility is exporting to MXF OP1a using the AVC-I codec. It's not lossless, but it is master quality, and the file size is a LOT smaller, so it may suit your needs.

  • What is the best way to get the last record from an internal table?

    Hi,
    What is the best way to get the latest year and month, i.e. the last record (KD00011001H 1110 2007 11), and not KE00012002H or KA00012003H?
    Is there any function for the MBEWH table?
    MATNR                 BWKEY      LFGJA LFMON
    ========================================
    KE00012002H        1210             2005  12
    KE00012002H        1210             2006  12
    KA00012003H        1000             2006  12
    KD00011001H        1110             2005  12
    KD00011001H        1110             2006  12
    KD00011001H        1110             2007  05
    KD00011001H        1110             2007  08
    KD00011001H        1110             2007  09
    KD00011001H        1110             2007  10
    KD00011001H        1110             2007  11
    thank you
    dennis
    Edited by: ogawa Dennis on Jan 2, 2008 1:28 AM
    Edited by: ogawa Dennis on Jan 2, 2008 1:33 AM

    Hi dennis,
    you can try this:
    SORT <your MBEWH internal table> BY lfgja DESCENDING lfmon DESCENDING.
    The first row will then hold the latest year and month.
    Thanks
    William Wilstroth

  • What is the best way to find the size of files?

    What is the best way to find the size of my files?

    Select a file or folder in the Finder and choose Get Info from the File menu.

  • HT1198 I shared disk space and my iPhoto library as described in this article. When creating the disk image, I thought I had set aside enough space to allow for growth (50G). I'm running out of space. What's the best way to increase the disk image size?

    I shared disk space and my iPhoto library as described in this article. When creating the disk image, I thought I had set aside enough space to allow for growth (50G). I'm running out of space. What's the best way to increase the disk image size?

    Done. Thank you, Allan.
    The sparse image article you linked to needs a little updating (or there's some variability in prompts with my OS and/or Disk Utility version - no password was required), but it worked.
    Phew! It would have been much more time-consuming to use Time Machine to recover all my photos after repartitioning the drive.

  • Is there a way of partitioning the data in the cubes

    Hello BPC Experts,
    We are currently running an AppSet with 4 applications, and two of these are getting really big.
    In BPC for MS there is a way to partition the data, as I saw in the How-Tos.
    In the NW version, BPC queries the MultiProvider. Is there a way to split the underlying basis cube into several cubes (split by time or legal entity)?
    I think this would help to increase the speed a lot, as the data could be read in parallel.
    Help is very much appreciated.
    Daniel
    Edited by: Daniel Schäfer on Feb 12, 2010 2:16 PM

    Hi Daniel,
    The short answer to your question is that, no, there is not a way to manually partition the infocubes at the BW level. The longer answer comes in several parts:
    1. BW automatically partitions the underlying database tables for BPC cubes based on request ID, depending on the BW setting for the cube and the underlying database.
    2. BW InfoCubes are very different from MS SQL server cubes (ROLAP approach in BW vs. MOLAP approach usually used in Analysis Services cubes). This results in BW cubes being a lot smaller, reads and writes being highly parallel, and no need for a large rollup operation if the underlying data changes. In other words, you probably wouldn't gain much from semantic partitioning of the BW cubes underlying BPC, except possibly in query performance, and only then if you have very high data volumes (>100 million records).
    3. BWA is an option for very large cubes. It is expensive, but if you are talking hundreds of millions of records you should probably consider it. It uses a completely different data model from ROLAP or MOLAP and it is highly partitionable, though this is transparent to the BW administrator.
    4. In some circumstances it is useful to partition BW cubes. In the BW world, this is usually called "semantic partitioning". For example, you might want to partition cubes by company, time, or category. In BW this is currently supported through manually creating several basic cubes under a multiprovider. In BPC, this approach is not supported. It is highly recommended to not change the BPC-generated Infocubes or Queries in any way.
    5. If you have determined that you really need to semantically partition to manage data volumes in BPC, the current best way is probably to have multiple BPC applications with identical dimensions. In other words, partition in the application layer instead of in the data layer.
    Hopefully that's helpful to you.
    Ethan

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. Company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
    The table has the following key columns for the SELECT (sample table):
    Client_Visit
    ID NUMBER(12,0) -- sequence-generated number
    EFF_DTE DATE -- effective date of the customer (sometimes the client becomes invalid and later becomes valid again)
    CREATE_TS TIMESTAMP(6)
    CLIENT_ID NUMBER(9,0)
    CASCADE_FLG VARCHAR2(1)
    On most of the reports, the records are fetched by MAX(eff_dte) and MAX(create_ts) with cascade flag = 'Y'.
    I have the following two queries, but neither is efficient; both take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
              (SELECT MAX (au_subtyp2.create_ts)
                 FROM au_subtyp au_subtyp2
                WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                  AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                  AND au_subtyp2.eff_dte =
                         (SELECT MAX (au_subtyp3.eff_dte)
                            FROM au_subtyp au_subtyp3
                           WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                             AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                             AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I already raised a thread a week back, and Dom suggested the following query. Its cost estimate is lower, but the runtime is the same and it uses the same amount of temp tablespace.
    SELECT au_id_k, pgm_struct_id_k
      FROM (SELECT au_id_k,
                   pgm_struct_id_k,
                   ROW_NUMBER () OVER (PARTITION BY au_id_k
                                       ORDER BY eff_dte DESC, create_ts DESC) rn,
                   create_ts, eff_dte, exists_flg
              FROM au_subtyp
             WHERE create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
               AND eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')) d
     WHERE rn = 1 AND exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
                  2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment right now. The test environment holds 2,809,605 records (about 2.8 million).
    The query returns 281,699 rows (about 280 thousand), so the selectivity is roughly 0.1. There are 2,808,905 distinct combinations of create_ts, eff_dte, and exists_flg. I am sure an index scan is not going to help much, as you said.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables (which have the same design), the temp tablespace grows even bigger (see the aggregate-based sketch at the end of this reply).
    Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM
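    One alternative worth testing (an editor's sketch, not from the original thread) is Oracle's FIRST/LAST aggregate form, which computes the same "latest row per au_id_k" result without projecting ROW_NUMBER over every column; whether it actually reduces the temp spill would need to be measured:
    SELECT au_id_k, pgm_struct_id_k
      FROM (SELECT au_id_k,
                   MAX (pgm_struct_id_k)
                      KEEP (DENSE_RANK FIRST ORDER BY eff_dte DESC, create_ts DESC) pgm_struct_id_k,
                   MAX (exists_flg)
                      KEEP (DENSE_RANK FIRST ORDER BY eff_dte DESC, create_ts DESC) exists_flg
              FROM au_subtyp
             WHERE create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
               AND eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')
             GROUP BY au_id_k)
     WHERE exists_flg = 'Y'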

  • Best way to get data from multiple tables

    Hi,
    I would like to know the best way of getting the data into the final table from multiple READ statements inside a loop, for example:
    loop at itab.
      read ...
      read ...
      read ...
      read ...
      data into final_itab
    endloop.
    Thanks,
    Manoj

    Hi.....
    Say we have two database tables, ZMODEL1 and ZMODEL2.
    Now declare internal tables, work areas, and (before that) structures for these two, and also declare one final output table to display the data:
    types: begin of ty_model1,
             za(10),
             zb type netwr,
             zc(10),
             zd(10),
             ze(10),
             zf(10),
           end of ty_model1,
           begin of ty_model2,
             za1(10),
             zb1(10),
             zc1(10),
             zd1(10),
             za(10),
           end of ty_model2,
           begin of ty_output,
             za(10),
             zb type netwr,
             zc(10),
             zd(10),
             ze(10),
             zf(10),
             za1(10),
             zb1(10),
             zc1(10),
             zd1(10),
           end of ty_output.

    data: t_model1 type standard table of ty_model1 initial size 0,
          t_model2 type standard table of ty_model2 initial size 0,
          t_output type standard table of ty_output initial size 0,
          w_model1 type ty_model1,
          w_model2 type ty_model2,
          w_output type ty_output.
    Now, in the START-OF-SELECTION event:
    select <field names in the same order as in the database table> from zmodel1 into table t_model1 where za in s_comp. "s_comp is a select-option for that field
    if sy-subrc = 0.
      select <field names in the same order as in the database table> from zmodel2 into table t_model2 for all entries in t_model1 where za = t_model1-za.
    endif.
    After that, fill the final output table:
    loop at t_model1 into w_model1.
      w_output-za = w_model1-za.
      w_output-zb = w_model1-zb.
      w_output-zc = w_model1-zc.
      w_output-zd = w_model1-zd.
      w_output-ze = w_model1-ze.
      w_output-zf = w_model1-zf.

      read table t_model2 into w_model2 with key za = w_model1-za.
      if sy-subrc = 0.
        w_output-za1 = w_model2-za1.
        w_output-zb1 = w_model2-zb1.
        w_output-zc1 = w_model2-zc1.
        w_output-zd1 = w_model2-zd1.
      endif.
      append w_output to t_output.
      clear w_output.
    endloop.
    Now display the final output table.
    This is the best way.
    Thanks,
    Naveen.I

  • Best way to set up a custom table using dates: YTD, quarters, months

    Hello-
    I did post this on the Crystal Reports forum; however, it really comes down to setting up a well-structured table to hold the data we report on, which is why I'm posting here.
    I am not a DBA, but I work with Crystal Reports, and we are working together to get data into tables that we can report on. We are in the process of creating a data warehouse, which will mainly hold summarized data that we export out of our legacy system and import into a MySQL database. Most of this data will be summarized by month, quarter, and year, and we will have multiple years of data. A lot of the reports we will create are comparisons, such as 2009 vs. 2008, January this year compared to January last year, or sales listed by month for Jan-Dec 2009. I would like this data to be easily displayed on a report in a side-by-side manner. To get this result, what is the best way to structure the data in the tables on a monthly, quarterly, and yearly basis? Right now we've got one field in the table called Date (which is a string), with values like:
    Date
    2008YTD
    2009YTD
    2009Jan
    2008Jan
    Is it best to break out the date information so that it will be easier to work with on the report side? Also, should this be stored in the table as a date instead of a string? If so, how do you account for a YTD date? Are we going to need two dates, a start date and an end date, to achieve YTD or QTD information? Do you recommend creating just a date table, and if so, how would that be structured?
    For reporting purposes, using Crystal Reports, I would like to display comparison data side by side; for this example, this year's goals compared to last year's goals by goal code A-Z (a credit code; the goals are the number of credits per code for the year). The end result should look like this:
    code   2009 goal   2008 goal
    A        25              20
    B        50              60
    C        10              15
    However, the data currently comes out like this (displaying all of the 2009 data first, then the 2008 data, not side by side, which matches how it sits in the table):
    code   2009 goal   2008 goal
    A        25
    B        50
    C        10
    etc to Z
    A                          20
    B                          60
    C                          15
    Right now the data is structured in the table like:
    Code  Goal  Date (this is currently a string in the db)
    A        25     YTD 2009
    B        50     YTD 2009
    etc. A-Z for 2009 then:
    A        20      YTD 2008
    B        60      YTD 2008
    Any thoughts on structuring the table would be appreciated. Thanks.

    Jennifer,
    Most of the DW examples I've seen use a date/time dimension table in the database. That table has multiple columns describing the specific date. For example, here are the columns in the "DimTime" table of the SQL Server sample database "AdventureWorksDW":
    COLUMN_NAME             COLUMN_INFO
    TimeKey               (int, not null)
    FullDateAlternateKey     (datetime, null)
    DayNumberOfWeek          (tinyint, null)
    EnglishDayNameOfWeek     (nvarchar(10), null)
    SpanishDayNameOfWeek    (nvarchar(10), null)
    FrenchDayNameOfWeek     (nvarchar(10), null)
    DayNumberOfMonth     (tinyint, null)
    DayNumberOfYear          (smallint, null)
    WeekNumberOfYear     (tinyint, null)
    EnglishMonthName     (nvarchar(10), null)
    SpanishMonthName     (nvarchar(10), null)
    FrenchMonthName          (nvarchar(10), null)
    MonthNumberOfYear     (tinyint, null)
    CalendarQuarter          (tinyint, null)
    CalendarYear          (char(4), null)
    CalendarSemester     (tinyint, null)
    FiscalQuarter          (tinyint, null)
    FiscalYear          (char(4), null)
    FiscalSemester          (tinyint, null)
    All of the fact tables then receive their date stamps by linking back to this table, using TimeKey as a foreign key (see the sketch below).
    HTH,
    Jason
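    A minimal sketch of that pattern (all table and column names here are illustrative, not from the thread; MySQL-style DDL, since the poster mentioned MySQL):

    -- Date dimension: one row per calendar day.
    CREATE TABLE dim_date (
      date_key         INT PRIMARY KEY,   -- e.g. 20090131
      full_date        DATE NOT NULL,
      day_of_month     TINYINT NOT NULL,
      month_of_year    TINYINT NOT NULL,
      calendar_quarter TINYINT NOT NULL,
      calendar_year    SMALLINT NOT NULL
    );

    -- Fact table: each goal row is stamped with a date via the foreign key.
    CREATE TABLE fact_goal (
      goal_code  CHAR(1) NOT NULL,        -- credit code A-Z
      goal_value INT     NOT NULL,
      date_key   INT     NOT NULL,
      FOREIGN KEY (date_key) REFERENCES dim_date (date_key)
    );

    -- Side-by-side comparisons then become grouping on the dimension, e.g.:
    -- SELECT f.goal_code, d.calendar_year, SUM(f.goal_value)
    --   FROM fact_goal f JOIN dim_date d ON d.date_key = f.date_key
    --  GROUP BY f.goal_code, d.calendar_year;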

  • Is there any way to increase the font size in Creative Cloud? Using a Lenovo Yoga 2 with a 14" screen and can barely see the menus

    Is there any way to increase the font size in Creative Cloud? I am using a Lenovo Yoga 2 with a 14" screen and can barely see the menus.

    In short, no. There is no fix yet; they have only barely acknowledged there is a problem. The "fix" is to enable 200% UI scaling, which makes everything comically huge and unusable. This is pretty much a joke. A really, really EXPENSIVE joke.

  • I am moving from PC to Mac. My PC has two internal drives and I have a 3 TB external. What is the best way to move the data from the internal drives to the Mac, and the best way to make the external drive read/write without losing data?

    I am moving from PC to Mac. My PC has two internal drives and I have a 3 TB external. What is the best way to move the data from the internal drives to the Mac, and the best way to make the external drive read/write without losing data?

    Paragon even has a non-destructive conversion utility if you do want to change the drive format.
    Hard to imagine using a 3 TB drive that isn't NTFS. The Mac uses GPT as the default partition scheme, along with HFS+.
    www.paragon-software.com
    Some general Apple Help www.apple.com/support/
    Also,
    Mac OS X Help
    http://www.apple.com/support/macbasics/
    Isolating Issues in Mac OS
    http://support.apple.com/kb/TS1388
    https://www.apple.com/support/osx/
    https://www.apple.com/support/quickassist/
    http://www.apple.com/support/mac101/help/
    http://www.apple.com/support/mac101/tour/
    Get Help with your Product
    http://docs.info.apple.com/article.html?artnum=304725
    Apple Mac App Store
    https://discussions.apple.com/community/mac_app_store/using_mac_apple_store
    How to Buy Mac OS X Mountain Lion/Lion
    http://www.apple.com/osx/how-to-upgrade/
    TimeMachine 101
    https://support.apple.com/kb/HT1427
    http://www.apple.com/support/timemachine
    Mac OS X Community
    https://discussions.apple.com/community/mac_os
