Difference between srvctl relocate scan and scan_listener?

Hi all,
I have a 2 node RAC cluster using 11.2.0.2 Grid Infrastructure.
I am trying to get clear in my mind what the difference is between relocating the SCAN using srvctl relocate scan and relocating the SCAN listener using srvctl relocate scan_listener.
Can anyone provide me with an explanation of when you would use each command and the effect it has?
Many thanks.

Hi,
see here for a closer explanation of SCAN:
http://www.oracle.com/technetwork/database/clustering/overview/scan-129069.pdf
Regarding your questions: the difference between a SCAN VIP and a normal RAC VIP is that the RAC VIP has a home node it wants to run on, and each node has one (whereas you only have 3 SCANs). If a normal VIP fails over to another node, it still exists there but does not accept connections, whereas a SCAN VIP is not fixed to a node, can run on any node in the cluster, and will accept connections at any time.
Because of how this works, the SCAN VIP always moves together with its SCAN listener (otherwise it would not make any sense). Hence there is really no difference between moving the SCAN VIP (which triggers a relocate of the listener) and moving the SCAN listener (which moves the VIP it depends on).
Regards
Sebastian

Similar Messages

  • Difference between object currency, area and transaction currency

    Hello all
    what is the difference between object currency, area and transaction currency?
    And from where can I change it?
    Thanks

    Check this link for currencies in Controlling:
    http://help.sap.com/saphelp_erp60_sp/helpdata/en/02/30793485231774e10000009b38f83b/frameset.htm
    By understanding their definition from the above link, am sure you'll also understand the difference (or the purpose) of them.
    Changing the currencies after the transaction data has been posted is not easy, even where it is possible. I'm sure this question of yours is of academic interest rather than a project need.
    Hope this helps.

  • Difference between loading a repository and connecting to a repository

    Hi,
    In the MDM Console I see 2 options:
    1) Load Repository
    2) Connect to Repository.
    I mount the MDM server, then I mount the MDM repository and connect to it.
    After connecting I am able to log into the Import Manager and Data Manager and choose the repository.
    Why is loading the repository required?
    Thanks,
    Vignesh

    Hi Vignesh,
    Loading the repository is required in order to log in to the MDM clients, such as Data Manager, Import Manager and Syndicator.
    If the repository is in an unloaded state, you must load it before you can log in to the MDM clients.
    I think in your case the repository was not unloaded, so you did not have to load it as it was already loaded.
    If you stop and start the server, however, loading the repository is required again.
    Thanks,
    Narendra

  • What is the difference between number, numeric, dec and decimal

    hello
    i want to ask what is the difference between number, numeric, dec and decimal data types?
    in this paper http://www.techonthenet.com/oracle/datatypes.php
    i see that there is no difference?
    so what is the trick here - why are there 4 types with one purpose?

    Generally speaking, Oracle has specific names for its own data types. Additionally it supports the ANSI names (as well as others) for some data types, which are then converted to the equivalent Oracle data type. See the documentation for details:
    ANSI, DB2, and SQL/DS Data Types
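To make the conversion concrete, here is a small sketch. The mapping below is a partial summary based on the Oracle documentation's ANSI-to-Oracle conversion table; the dict itself is just an illustration, not anything Oracle exposes:

```python
# Partial summary (illustration only) of how Oracle maps ANSI numeric
# type names onto its own NUMBER type, per the Oracle SQL Reference.
ANSI_TO_ORACLE = {
    "NUMERIC(p,s)": "NUMBER(p,s)",
    "DECIMAL(p,s)": "NUMBER(p,s)",
    "DEC(p,s)":     "NUMBER(p,s)",
    "INTEGER":      "NUMBER(38)",
    "SMALLINT":     "NUMBER(38)",
}

# The three ANSI names from the question all land on the same Oracle
# type, which is why the linked page shows no practical difference.
targets = {ANSI_TO_ORACLE[t] for t in ("NUMERIC(p,s)", "DECIMAL(p,s)", "DEC(p,s)")}
print(targets)
```

So NUMBER is the native type, and NUMERIC, DEC and DECIMAL are ANSI synonyms that Oracle accepts and stores as NUMBER.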

  • Difference between Null item, Blank item, item is zero and empty item

    Please describe the difference between:
    null item
    empty item
    blank item
    and zero item
    And how can we test each one?
    How can we test if the value of an item, say Px_y, is null or
    blank
    empty
    zero
    Regards

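A language-neutral way to see the four cases is sketched below in Python, purely as an analogy (the classify helper is hypothetical). Note that in Oracle SQL the distinction partly collapses: an empty VARCHAR2 string is treated as NULL, and you must test with IS NULL rather than =:

```python
def classify(value):
    """Classify an item value as null, zero, empty, or blank (analogy only)."""
    if value is None:
        return "null"       # no value at all
    if not isinstance(value, str) and value == 0:
        return "zero"       # a real numeric value that happens to be 0
    if isinstance(value, str) and value == "":
        return "empty"      # a string with no characters
    if isinstance(value, str) and value.strip() == "":
        return "blank"      # whitespace only: looks empty but is not
    return "non-empty"

print(classify(None), classify(0), classify(""), classify("   "))
```

The key point the four names capture: "null" means no value exists, "zero" is a genuine numeric value, and "empty" vs. "blank" distinguishes a zero-length string from one containing only whitespace.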

  • Optimiser difference: a max() sometimes doing a backward scan and sometimes a forward scan

    I am interested in why we get 2 different query plans on Sybase 15.5 and 15.7 for similar queries:
    declare @MID int
    select @MID = 25717
    select MID , max(Date) from CB where MID = @MID group by MID
    select @MID, max(Date) from CB where MID = @MID
    The index is on (MID, Date).
    The first does forward scan and the second does a backward scan.
    With our data, the first query does 8000 page reads (with a SAN read costing 4ms, that's 32s)
    and the second query does 4 page reads (with a SAN read costing 4ms, that's 16ms).
    1)
    |   |   |GROUP SORTED Operator (VA = 1)
    |   |   |  Evaluate Grouped MAXIMUM AGGREGATE.
    |   |   |
    |   |   |   |SCAN Operator (VA = 0)
    |   |   |   |  FROM TABLE
    |   |   |   |  CB
    |   |   |   |  Index : IDX1
    |   |   |   |  Forward Scan.
    |   |   |   |  Positioning by key.
    |   |   |   |  Index contains all needed columns. Base table will not be read.
    |   |   |   |  Keys are:
    |   |   |   |    MID ASC
    |   |   |   |  Using I/O Size 16 Kbytes for index leaf pages.
    |   |   |   |  With LRU Buffer Replacement Strategy for index leaf pages.
    2)
    |   |  Evaluate Ungrouped MAXIMUM AGGREGATE.
    |   |  Scanning only up to the first qualifying row.
    |   |
    |   |   |SCAN Operator (VA = 0)
    |   |   |  FROM TABLE
    |   |   |  CB
    |   |   |  Index : IDX1
    |   |   |  Backward scan.
    |   |   |  Positioning by key.
    |   |   |  Index contains all needed columns. Base table will not be read.
    |   |   |  Keys are:
    |   |   |    MID ASC
    |   |   |  Using I/O Size 2 Kbytes for index leaf pages.
    |   |   |  With LRU Buffer Replacement Strategy for index leaf pages.
    Why does the optimiser generate different plans?

    Okay - so this is one reason I like to see plancost vs. just showplan... you can see in the one without the group by clause that it does a ScalarAgg vs. a GroupSorted, a la:
    (without group by clause)
    ==================== Lava Operator Tree ====================
                            Emit                               
                            (VA = 2)                           
                            r:1 er:1                           
                            cpu: 0                             
                ScalarAgg                                      
                  Max                                          
                (VA = 1)                                       
                r:1 er:1                                       
                cpu: 0                                         
    IndexScan                                                  
    IDX1                                                       
    (VA = 0)                                                   
    r:1 er:1                                                   
    l:3 el:3                                                   
    p:3 ep:3                                                   
    ============================================================
    (with group by clause)
    ==================== Lava Operator Tree ====================
                                        Emit                    
                                        (VA = 3)                
                                        r:1 er:8                
                                        cpu: 300                
                            Restrict                            
                            (0)(0)(0)(4)(0)                     
                            (VA = 2)                            
                            r:1 er:8                            
                GroupSorted                                     
                Grouping                                        
                (VA = 1)                                        
                r:1 er:8                                        
    IndexScan                                                   
    IDX1                                                        
    (VA = 0)                                                    
    r:353666 er:158986                                          
    l:863 el:386                                                
    p:176 ep:78                                                 
    ============================================================
    Now the key to the problem is in the GroupSorted function.
    But, let's step back a bit.   Query optimization is just one of the many (and nearly final) steps prior to query execution.  Prior to query optimization we have the TDS receive buffers, the SQL parsing, the SQL normalization and pre-processing.   In the pre-processing, we normally try to construct what is called the LOP (Logical Operator) tree.   Note that nowhere in that sequence is any execution - I mention this because once before you commented on someone else's posting about a purported optimization problem with
    select * from table where column=<value> or 1=2
    ....the notion of evaluating 1=2 wouldn't happen until the execution phase - so optimization will not consider it - in this case it looks at it as an expression to be evaluated and skips it.   If you want to see what I mean, do a showplan and statistics io on
    select * from sysobjects where 1=2
    and you will see that the optimizer chooses a tablescan due to the lack of sargs... but it is during query execution that we short-circuit, hence the 0 LIOs.    I point that out because there are different phases involved, and in this case it is the pre-processor that selects the overall LOP tree to start with... but then the optimizer has to consider which LOPs to use underneath that, such as GroupSorted (using an index) vs. GroupSorting (sort operation), etc.
    Soooo....in the GROUP BY example, it starts with:
    The Lop tree:
    ( project
            ( group
                    ( scan CB
    vs. your other example (without the Group By):
    The Lop tree:
    ( project
            ( scalar
                    ( scan CB
    ....so you can see the difference right off the bat in what the optimizer received from the pre-processor.   Like I pointed out earlier, the problem *started* the second you put the group by clause in - before the optimizer even got a look at it.
    You then get 2 optimization blocks....the innermost for the (scan CB) and then the outer for (group (inner block)) ala:
    OptBlock1
            The Lop tree:
            ( scan CB
            Generic Tables: ( Gtt1( CB ) Gti2( IDX1 ) )
            Generic Columns: (  Gc0(CB ,Rid) Gc1(CB ,MID) Gc2(CB ,ValuationDate))  Virtual Generic Columns: ( )
            Predicates: ( { CB.MID Gtg0.Gc4 }  = 0 tc:{1} )
            Transitive Closures: ( Tc0 = {  Gc0(CB ,Rid)} Tc1 = {  Gc1(CB ,MID) Gc4(Gtg0 ,_gcelement_4)} Tc2 = {  Gc2(CB ,ValuationDate)} )
    OptBlock0
            The Lop tree:
            ( pseudoscan
            Generic Tables: ( Gtg0 )
            Generic Columns: (  Gc3(Gtg0 ,_gcelement_3) Gc4(Gtg0 ,_gcelement_4) Gc5(Gtg0 ,_virtualagg) Gc6(Gtg0 ,_virtualagg))  Virtual Generic Columns: ( )
            Predicates: ( { CB.MID Gtg0.Gc4 }  = 0 tc:{1} )
            Transitive Closures: ( Tc3 = {  Gc3(Gtg0 ,_gcelement_3) Gc5(Gtg0 ,_virtualagg) Gc6(Gtg0 ,_virtualagg)} Tc1 = {  Gc1(CB ,MID) Gc4(Gtg0 ,_gcelement_4)} )
    The question is, why did we estimate 8 rows returned from the group by????   And would that change the behavior???
    That comes from this section:
            ( PopGroupSorted
                    proj: {{ CB.MID Gtg0.Gc4 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } }
                    pred: [Tc{} Pe{{ CB.MID Gtg0.Gc4 }  = 0}]
                    subs: {CB.MID ,Gtg0._virtualagg }
                    cost: 66305.12
                    I/O estimate : [
                            rowcount=7.388771
                            averagewidth=19
                            pages=1
                            prefetchpages=1
                            lio=0 pio=0 cpu=158985.8
                            total lio=385.3457 total pio=77.60534 total cpu=635943
                            tempdb=0
                    Cache Strategy: [
                            prefetch=YES
                            iosize=131072 Bytes
                            bufreplace=LRU
                    order: none
                    ( PopIndScan index: Gti2( IDX1 )
                                    table: Gtt1( CB )
                            proj: {{ CB.MID Gtg0.Gc4 } ,{ CB.ValuationDate } }
                            pred: [Tc{} Pe{{ CB.MID Gtg0.Gc4 }  = 0}]
                            subs: {CB.MID ,CB.ValuationDate }
                            cost: 50406.55
                            I/O estimate : [
                                    rowcount=158985.8
                                    averagewidth=14
                                    pages=383.3457
                                    prefetchpages=383.3457
                                    lio=385.3457 pio=77.60534 cpu=476957.2
                                    scanlio=385.3457 scanpio=77.60534 scancpu=476957.2
                                    tempdb=0
                            Cache Strategy: [
                                    prefetch=YES
                                    iosize=131072 Bytes
                                    bufreplace=LRU
                            order: none )
    Now to understand what is happening, let's take the bad example:
    select MID , max(Date) from CB where MID = @MID group by MID
    Now, let's make it a bit more generic:
    select type, count(*), max(crdate)
    from sysobjects
    where type='P'
    group by type
    If run in master on my machine, I get:
    type                                            
    P             20             Oct  2 2014 12:23PM
    ...as a result, with an initial LOP of:
    The Lop tree:
    ( project
    ( group
      ( scan sysobjects
    Which should be no surprise....with no index on type...or due to the quantity of rows in sysobjects, we simply go for a tablescan.   But we have our (group ) LOP.....and if we look at the plancost:
    ==================== Lava Operator Tree ====================
                                        Emit                   
                                        (VA = 3)               
                                        r:1 er:2               
                                        cpu: 0                 
                            Restrict                           
                            (0)(0)(0)(4)(0)                    
                            (VA = 2)                           
                            r:1 er:2                           
                GroupSorted                                    
                Grouping                                       
                (VA = 1)                                       
                r:1 er:2                                       
    TableScan                                                  
    sysobjects                                                 
    (VA = 0)                                                   
    r:20 er:19                                                 
    l:8 el:8                                                   
    p:0 ep:2                                                   
    ============================================================
    We see sort of the same problem....we are estimating 2 rows to be returned from the group by. 
    Stepping back a bit, remember: if we don't have any stats on a column, we assume a selectivity of 0.1 for =, 0.25 for a bounded range (between) and 0.33 for an unbounded range.   In my case, with no stats on type in sysobjects in master, we can see this clearly in the index selectivity area of:
    The table (Datarows) has 188 rows, 6 pages,
    Data Page Cluster Ratio 0.9999900
        type = 'P'
       Estimated selectivity for type,
            selectivity = 0.1,
        Search argument selectivity is 0.1.
        using table prefetch (size 32K I/O)
        Large IO selected: The number of leaf pages qualified is > MIN_PREFETCH pages
        in data cache 'default data cache' (cacheid 0) with LRU replacement
    ...which shows the magic values being used in the index selectivity.   In the group by costing, we see:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc5 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } ,{ Gtg0.Gc4 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
      subs: {sysobjects.type ,Gtg0._virtualagg ,Gtg0._virtualagg }
      cost: 65.44175
      I/O estimate : [
       rowcount=1.88
       averagewidth=21
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=18.8
       total lio=8 total pio=1.00007 total cpu=244.4
      Cache Strategy: [
       prefetch=YES
       iosize=4096 Bytes
       bufreplace=LRU
      order: none
      ( PopTabScan table: Gtt1( sysobjects )
       proj: {{ sysobjects.type Gtg0.Gc5 } ,{ sysobjects.crdate } }
       pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
       subs: {sysobjects.type ,sysobjects.crdate }
       cost: 63.56175
       I/O estimate : [
       rowcount=18.8
        averagewidth=12
        pages=8
        prefetchpages=8
        lio=8 pio=1.00007 cpu=225.6
        scanlio=8 scanpio=1.00007 scancpu=225.6
       Cache Strategy: [
        prefetch=YES
        iosize=32768 Bytes
        bufreplace=LRU
       order: none )
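The arithmetic ties those two plan fragments together. The scan estimate follows directly from the 0.1 magic value; for the group estimate, one plausible reading (my assumption, not stated in the plan) is that the same 0.1 is reused as a density guess for the grouping column:

```python
# Plain arithmetic check against the row counts in the plan above.
rows = 188             # rows in sysobjects (from the table stats shown)
eq_selectivity = 0.1   # magic value for '=' with no column stats

# PopTabScan shows rowcount=18.8: table rows times the magic selectivity.
scan_estimate = rows * eq_selectivity

# PopGroupSorted shows rowcount=1.88; assuming the same 0.1 is applied
# again as a distinct-value (density) guess reproduces the number.
group_estimate = scan_estimate * eq_selectivity

print(round(scan_estimate, 2), round(group_estimate, 2))
```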
    Now....to go from the row count to the estimated rows for a group by, we don't use the column selectivity - we use the column density stats.  The real reason behind the 0.1 is that we are saying there are likely only 10 distinct values.   For example, if I run update statistics sysobjects (type) on my system, the output changes to:
    ==================== Lava Operator Tree ====================
                                        Emit                   
                                        (VA = 3)               
                                        r:1 er:1               
                                        cpu: 0                 
                            Restrict                           
                            (0)(0)(0)(4)(0)                    
                            (VA = 2)                           
                            r:1 er:1                           
                GroupSorted                                    
                Grouping                                       
                (VA = 1)                                       
                r:1 er:1                                       
    TableScan                                                  
    sysobjects                                                 
    (VA = 0)                                                   
    r:20 er:21                                                 
    l:8 el:8                                                   
    p:0 ep:2                                                   
    ============================================================
    Which is based on:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc5 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } ,{ Gtg0.Gc4 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
      subs: {sysobjects.type ,Gtg0._virtualagg ,Gtg0._virtualagg }
      cost: 65.80175
      I/O estimate : [
       rowcount=1
       averagewidth=21
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=20
       total lio=8 total pio=1.00007 total cpu=248
      Cache Strategy: [
       prefetch=YES
       iosize=4096 Bytes
       bufreplace=LRU
      order: none
      ( PopTabScan table: Gtt1( sysobjects )
       proj: {{ sysobjects.type Gtg0.Gc5 } ,{ sysobjects.crdate } }
       pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc5 }  = 'P'}]
       subs: {sysobjects.type ,sysobjects.crdate }
       cost: 63.80175
       I/O estimate : [
        rowcount=20
        averagewidth=12
        pages=8
        prefetchpages=8
        lio=8 pio=1.00007 cpu=228
        scanlio=8 scanpio=1.00007 scancpu=228
    Note that my overall rowcount went up a bit....but the group by estimate went to 1 row....if we look at the column stats via optdiag:
    Statistics for column:                                                          "type"
    Last update of column statistics:                                               Feb 15 2015  9:18:32:850PM
         Range cell density:                                                        0.0053191489361702
        Total density:                                                             0.4216274332277049
         Range selectivity:                                                         default used (0.33)
         In between selectivity:                                                    default used (0.25)
         Unique range values:                                                       0.0053191489361702
         Unique total values:                                                       0.2000000000000000
         Average column width:                                                      default used (2.00)
         Rows scanned:                                                              188.0000000000000000
         Statistics version:                                                        4
    Histogram for column:                                                           "type"
    Column datatype:                                                                char(2)
    Requested step count:                                                           20
    Actual step count:                                                              9
    Sampling Percent:                                                               0
    Tuning Factor:                                                                  20
    Out of range Histogram Adjustment is DEFAULT.                                  
    Low Domain Hashing.                                                            
         Step     Weight                    Value
            1     0.00000000       <=       "EJ"
            2     0.00531915        <       "P "
            3     0.10638298        =       "P "
            4     0.00000000        <       "S "
            5     0.30319148        =       "S "
            6     0.00000000        <       "U "
            7     0.56382978        =       "U "
            8     0.00000000        <       "V "
            9     0.02127660        =       "V "
    ....so the 20 rows are due to the slightly higher 0.10638 histogram weight vs. the 0.1 magic value...
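That last sentence can be checked with plain arithmetic: the histogram step weight for "P " times the rows scanned reproduces the post-statistics estimate:

```python
# Numbers taken from the optdiag output above.
weight_P = 0.10638298   # histogram step 3 weight for "P "
rows_scanned = 188      # rows scanned when statistics were gathered

estimate = weight_P * rows_scanned
print(round(estimate, 4))   # ~20.0, vs. 18.8 from the 0.1 magic value
```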
    Now....if we know the value for the where clause and have stats on the column, I notice we always estimate 1 row to be returned from the group by.   However, if we don't have stats - or if we don't know the value - we estimate more than one row.   For example, if I declare @var for type, a la:
    declare @type char(2)
    select @type='P'
    select type, max(crdate)
    from sysobjects
    where type=@type
    group by type
    go
    I then get:
    ( PopGroupSorted
      proj: {{ sysobjects.type Gtg0.Gc4 } ,{ Gtg0.Gc3 Gtg0._virtualagg Gtg0._virtualagg } }
      pred: [Tc{} Pe{{ sysobjects.type Gtg0.Gc4 }  = '}]
      subs: {sysobjects.type ,Gtg0._virtualagg }
      cost: 83.58154
      I/O estimate : [
      rowcount=4.17777
       averagewidth=17
       pages=1
       prefetchpages=1
       lio=0 pio=0 cpu=79.26596
       total lio=8 total pio=1.00007 total cpu=425.7979
    ...if we have stats on type, vs. the 1.88 when we don't.   Which points out that the uniqueness stats and selectivity are likely considered in the final cost of the group by.   In your case, the estimate of:
                    I/O estimate : [
                            rowcount=7.388771
                            averagewidth=19
                            pages=1
                            prefetchpages=1
                            lio=0 pio=0 cpu=158985.8
                            total lio=385.3457 total pio=77.60534 total cpu=635943
                            tempdb=0
    ...is likely due to the column uniqueness and selectivity stats, given the @MID variable.
    Note that even though we now know the GROUP BY is only going to return a single row, we still don't change the plan.....we still hit the index in ascending fashion.   The reason is that we don't assume (or should I say, we don't make a special case of the fact) that the only aggregate is a max().   It could be count(), avg(), min() or a combination, any of which could require reading every index row, other than strictly min().   Could we do so??  Possibly.....however, arguably, the select max() without the group by is the more correct SQL to use....   We do (on the other hand) have special processing for ScalarAgg with max(): it does the backward scan if max() is the only aggregate....if you do select min(), max() - then it does a forward scan even with ScalarAgg.   If you have just min(), we do a forward scan and short-circuit after the first row (as if select top 1)....
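To make the access-path difference concrete, here is a small hypothetical simulation (a sorted Python list standing in for the index on (MID, Date); the function names and read counts are illustrative, not ASE internals). The grouped plan walks every qualifying leaf entry forward, while the scalar-max plan positions at the end of the qualifying range and reads one row:

```python
import bisect

# Hypothetical index on (MID, Date): a sorted list of key tuples,
# 5 MIDs with 100 dates each.
index = sorted((mid, date) for mid in range(1, 6) for date in range(100))

def grouped_max(index, mid):
    """GroupSorted-style plan: forward scan of every entry for mid."""
    reads, best = 0, None
    lo = bisect.bisect_left(index, (mid,))   # position at first key for mid
    for key in index[lo:]:
        if key[0] != mid:
            break
        reads += 1
        best = key[1] if best is None else max(best, key[1])
    return best, reads

def scalar_max(index, mid):
    """ScalarAgg-style plan: backward scan, first qualifying row wins."""
    hi = bisect.bisect_left(index, (mid + 1,))  # just past the last key for mid
    key = index[hi - 1]
    return (key[1], 1) if key[0] == mid else (None, 1)

print(grouped_max(index, 3))  # (99, 100): right answer, 100 entries read
print(scalar_max(index, 3))   # (99, 1): same answer, one entry read
```

Both return max(Date) = 99 for MID 3, but the forward scan touches every entry for that MID, which is the 8000-reads-vs-4-reads gap in the original question.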

  • RAC: "srvctl config scan" output

    Hi,
    My company is using RAC on Oracle 11gR2.
    I am trying to understand the beast and there is something I am stuck on.
    We have this:
    [root@devrac1 admin]$ srvctl config scan
    SCAN name: <host>.<domain>, network: 1/10.12.2.0/255.255.255.0/eth0
    SCAN VIP name: scan1, IP: /10.12.2.206/10.12.2.206
    SCAN VIP name: scan2, IP: /10.12.2.207/10.12.2.207
    SCAN VIP name: scan3, IP: /10.12.2.208/10.12.2.208
    This output looks like the output described in Oracle Support note 975457.1.
    I read this whitepaper: http://www.oracle.com/technetwork/products/clustering/overview/scan-129069.pdf
    and there the output looks more like this:
    [root@<host> admin]$ srvctl config scan
    SCAN name: <host>, network: 1/10.12.2.0/255.255.255.0/eth0
    SCAN VIP name: scan1, IP: /<host>.<domain>/10.12.2.206
    SCAN VIP name: scan2, IP: /<host>.<domain>/10.12.2.207
    SCAN VIP name: scan3, IP: /<host>.<domain>/10.12.2.208
    Why is it different? What does it mean?
    Thanks for your help.

    Hi, thanks for the quick answer!
    I think my post is a little misleading (I was too busy obfuscating).
    In the first document it is the same address twice:
    SCAN VIP name: scan1, IP: /10.12.2.206/10.12.2.206
    In the second document:
    SCAN VIP name: scan1, IP: /<dns name for the scan address>/10.12.2.206
    I don't understand the meaning of this difference.

  • What are the differences between the target tablespace and the source tablespace

    The IMPDP command produces many errors, but the EXAMPLE tablespace is transported to the target database successfully. It seems that the transported tablespace is no different from the source tablespace.
    Why are so many errors produced?
    How can these errors be avoided?
    What are the differences between the target tablespace and the source tablespace?
    Was this Data Pump operation really successful?
    The following is the log output:
    [oracle@hostp ~]$ impdp system/oracle dumpfile=user_dir:demo02.dmp tablespaces=example remap_tablespace=example:example
    Import: Release 10.2.0.1.0 - Production on Sunday, 28 September, 2008 18:08:31
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SYSTEM"."SYS_IMPORT_TABLESPACE_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_TABLESPACE_01": system/******** dumpfile=user_dir:demo02.dmp tablespaces=example remap_tablespace=example:example
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    CREATE TABLE "OE"."CUSTOMERS" ("CUSTOMER_ID" NUMBER(6,0), "CUST_FIRST_NAME" VARCHAR2(20) CONSTRAINT "CUST_FNAME_NN" NOT NULL ENABLE, "CUST_LAST_NAME" VARCHAR2(20) CONSTRAINT "CUST_LNAME_NN" NOT NULL ENABLE, "CUST_ADDRESS" "OE"."CUST_ADDRESS_TYP" , "PHONE_NUMBERS" "OE"."PHONE_LIST_TYP" , "NLS_LANGUAGE" VARCHAR2(3), "NLS_TERRITORY" VARCHAR2(30), "CREDIT_LIMIT" NUMBER(9,2), "CUST_EMAIL" VARCHAR2(30), "ACCOUNT_MGR_ID" NU
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    CREATE TABLE "IX"."ORDERS_QUEUETABLE" ("Q_NAME" VARCHAR2(30), "MSGID" RAW(16), "CORRID" VARCHAR2(128), "PRIORITY" NUMBER, "STATE" NUMBER, "DELAY" TIMESTAMP (6), "EXPIRATION" NUMBER, "TIME_MANAGER_INFO" TIMESTAMP (6), "LOCAL_ORDER_NO" NUMBER, "CHAIN_NO" NUMBER, "CSCN" NUMBER, "DSCN" NUMBER, "ENQ_TIME" TIMESTAMP (6), "ENQ_UID" VARCHAR2(30), "ENQ_TID" VARCHAR2(30), "DEQ_TIME" TIMESTAMP (6), "DEQ_UID" VARCHAR2(30), "DEQ_
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SH"."CUSTOMERS" 9.850 MB 55500 rows
    . . imported "SH"."SUPPLEMENTARY_DEMOGRAPHICS" 695.9 KB 4500 rows
    . . imported "OE"."PRODUCT_DESCRIPTIONS" 2.379 MB 8640 rows
    . . imported "SH"."SALES":"SALES_Q4_2001" 2.257 MB 69749 rows
    . . imported "SH"."SALES":"SALES_Q1_1999" 2.070 MB 64186 rows
    . . imported "SH"."SALES":"SALES_Q3_2001" 2.129 MB 65769 rows
    . . imported "SH"."SALES":"SALES_Q1_2000" 2.011 MB 62197 rows
    . . imported "SH"."SALES":"SALES_Q1_2001" 1.964 MB 60608 rows
    . . imported "SH"."SALES":"SALES_Q2_2001" 2.050 MB 63292 rows
    . . imported "SH"."SALES":"SALES_Q3_1999" 2.166 MB 67138 rows
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."REGIONS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."REGIONS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."COUNTRIES" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."COUNTRIES" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."LOCATIONS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."LOCATIONS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."DEPARTMENTS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."DEPARTMENTS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOBS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOBS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."EMPLOYEES" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."EMPLOYEES" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOB_HISTORY" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOB_HISTORY" TO "EXAM_03"
    ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39112: Dependent object type INDEX:"OE"."CUSTOMERS_PK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_ACCOUNT_MANAGER_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_LNAME_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_EMAIL_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"PM"."PRINTMEDIA_PK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMER_CREDIT_LIMIT_MAX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMER_ID_MIN" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMERS_PK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"PM"."PRINTMEDIA__PK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"IX"."SYS_C005192" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUSTOMERS_PK" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_ACCOUNT_MANAGER_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_LNAME_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_EMAIL_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"PM"."PRINTMEDIA_PK" creation failed
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    ORA-39112: Dependent object type REF_CONSTRAINT:"OE"."CUSTOMERS_ACCOUNT_MANAGER_FK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39083: Object type REF_CONSTRAINT failed to create with error:
    ORA-00942: table or view does not exist
    Failing sql is:
    ALTER TABLE "OE"."ORDERS" ADD CONSTRAINT "ORDERS_CUSTOMER_ID_FK" FOREIGN KEY ("CUSTOMER_ID") REFERENCES "OE"."CUSTOMERS" ("CUSTOMER_ID") ON DELETE SET NULL ENABLE
    ORA-39112: Dependent object type REF_CONSTRAINT:"PM"."PRINTMEDIA_FK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39112: Dependent object type INDEX:"OE"."CUST_UPPER_NAME_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_UPPER_NAME_IX" creation failed
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    ORA-39112: Dependent object type PROCACT_INSTANCE skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39083: Object type PROCACT_INSTANCE failed to create with error:
    ORA-01403: no data found
    ORA-01403: no data found
    Failing sql is:
    BEGIN
    SYS.DBMS_AQ_IMP_INTERNAL.IMPORT_SIGNATURE_TABLE('AQ$_ORDERS_QUEUETABLE_G');COMMIT; END;
    Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCDEPOBJ
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."AQ$_ORDERS_QUEUETABLE_V" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE_N" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE_R" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."AQ$_ORDERS_QUEUETABLE_E" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Job "SYSTEM"."SYS_IMPORT_TABLESPACE_01" completed with 63 error(s) at 18:09:14

    Short of trying to reverse-engineer the objects from the dump file (I believe Data Pump export files contain XML representations of the DDL in addition to various binary sections, making it potentially possible to scan the dump file for the object definitions), I would tend to assume that the export didn't include those type definitions.
    Since it looks like you're trying to set up the sample schemas, is there a reason you wouldn't just run the sample schema setup scripts on the destination database? Why use Data Pump in the first place?
    Justin
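
    For what it's worth, the ORA-01917 grant failures at the top of the log are typically worked around either by pre-creating the missing grantees before re-running the import, or by skipping object grants entirely. A hypothetical sketch (user names come from the log above; the password is a placeholder):

    ```sql
    -- Option 1: pre-create the grantees the dump expects, so the
    -- GRANT statements in the dump file can succeed on re-import.
    CREATE USER user1 IDENTIFIED BY "change_me";
    CREATE USER exam_03 IDENTIFIED BY "change_me";

    -- Option 2: skip object grants during the import instead, e.g. in
    -- the impdp parameter file:
    --   EXCLUDE=OBJECT_GRANT
    ```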

  • Diference betwen .mov[dv] files and .dv files

    There are some applications (QuickTime is one of them) that, when saving a .dv file imported from a miniDV camcorder, create a .mov file (with DV inside) instead of a .dv file. This .mov file happens to be exactly twice the size of the original .dv file.
    What is the difference between them?
    Is that .mov file in fact a DV file?
    Why is it double the size?
    To get the original container/format I have to choose export instead of save. Does this mean that the files are being re-encoded even when the input and output formats are the same? Or do I get exactly the same quality even choosing export instead of save?
    Thanks. David Casillas

    1.- I import my miniDV tapes using iMovie '08 because I like the way it names the files. Then I copy the .dv files to another folder and delete the iMovie Events folder.
    I don't see anything wrong here -- although I might do things differently if the iMovie Event and your "other" folder are both on the same hard drive.
    2.- Then I do some splitting/joining of the files... Trying other applications tonight, I think MPEG Streamclip will be the best option to deal with these files, since it seems that it doesn't internally convert the files to .mov. The problem is that although the join is easy to do, the split involves a few more steps.
    I would also recommend MPEG Streamclip here. Unlike QT Pro, you can set in/out points and perform saves without having to perform a "trim" step in between. And, as you noted, the "Save As..." option does allow you to save the file with either the original DV or an MOV file extension. I am not, however, sure whether the container is actually changed. I have noted that saving H.264/AAC data in an MOV file to an MP4 extension with MPEG Streamclip now generates the same "Description does not match" modal message as changing the extension manually in the Finder. (SimpleMovieX, however, does actually scan and write the data to a "real" MP4 container in this situation.) In any event, I believe you would have no problem using the MPEG Streamclip "Save As..." option to create a series of segments which, if they have alphanumeric sequential names, could later be opened as a multi-clip set and merged into a single file in MPEG Streamclip using the same "Save As..." option. Using the "Save As..." option in applications like QT Pro does, of course, automatically write the data to a "real" MOV container.

  • Pan-Scan and Letterbox??????

    What is the difference between the Display Mode settings 16:9 Pan-Scan and 16:9 Pan-Scan and Letterbox?
    Thanks

    What you want to use is Letterbox, not Pan & Scan or Pan & Scan/Letterbox. (Using both allows the user to choose which one)
    From the manual (do a search for Pan-Scan for more info in the PDF/manual)
    The pan-scan method of displaying 16:9 video on a 4:3 monitor was developed as a compromise between letterbox, which displays all the video content but with black bars at the top and bottom, and the only other alternative: filling the entire 4:3 screen but cropping some of the content.
    Broad stroke - Pan-Scan uses "vectors" to control what part of a video is shown.
    Final Cut and DVD SP cannot add the vectors, and if you set that playback mode anyway it may lead to results you do not want (i.e., only the center part of the video being shown).
    Practically speaking, if you shot 16:9 but really composed with 4:3 in mind, the outside areas may not be needed, but you need to look at the footage....

  • Difference between transaction SWNCONFIG/SWNADMIN and report RPTARQEMAIL

    Hello,
    I am new in ESS time management workflows. I have worked on workflows in other areas such as SD, MM & FI. In SD, MM workflows I used transaction SWNCONFIG/SWNADMIN to configure notification settings. But in ESS time management I have found that report RPTARQEMAIL is used to send notifications.
    Now I have to send notifications for ESS leave management and CATS workflows. My question is: which of the two, transaction SWNCONFIG or report RPTARQEMAIL, should I use to configure notification settings for TM workflows? Also, please explain the difference between the two approaches.
    Thanks in advance.
    Regards,
    Vijay

    Vijay,
    RPTARQEMAIL is specifically for leave requests.
    check this link on SWNCONFIG http://help.sap.com/saphelp_nw04/helpdata/en/27/9a793426cea749b18fa409565c3dee/content.htm
    Thanks
    Bala Duvvuri

  • Difference Between Layer 2 Security and WPA & WPA2 Parameters

    Hello everyone, and thank you for reading me again.
    I have a question about WLAN security configuration. I created one SSID, and when I configured the Security tab I saw several Layer 2 security options.
    I'm not an expert, as you can see, but I thought that when you use WPA + WPA2 you would use a PSK to join the network.
    I see I can use 802.1X in Layer 2 security.
    I don't really understand the difference between these options. I mean, if I use 802.1X for Layer 2 security, do I need to establish a WEP key?
    If I use WPA + WPA2 for Layer 2 security and 802.1X for Auth Key Management, do I need to use a RADIUS server?
    Is there any documentation about these options?

    Look at it this way.
    WPA+TKIP
    Uses pre shared key and is not supported by 802.11n
    WPA2+AES
    Also uses pre shared key and is supported by 802.11n
    WPA+TKIP or WPA2+AES with 802.1X
    Requires a RADIUS server or the use of LDAP.
    Also requires a certificate:
    a server-side certificate for PEAP, and both server- and client-side certificates for EAP-TLS.
    You also want to only use either WPA/TKIP or WPA2/AES, not both and don't mix and match.
    Sent from Cisco Technical Support iPhone App

  • What is the differences between Ant build jar and eclipse export jar ?

    I have tried the Eclipse export method to generate a JAR file; however, it doesn't work with my server. But when I write an Ant build file to create the JAR file, it works!!
    Does anybody know why?

    Difference between JAR and WAR
    .jar files: These files have the .jar extension. A .jar file contains libraries, resources, and accessory files such as property files.
    .war files: These files have the .war extension. A .war file contains a web application that can be deployed on any servlet/JSP container. It contains the JSPs, HTML, JavaScript, and other files necessary for the web application.
    Source:
    http://forum.java.sun.com/thread.jspa?threadID=485823&messageID=2273976
    The main difference between a JAR and a WAR is that a JAR is not really a single purpose format, while a WAR is. With a JAR file, you can package lots of different things. Usually you package up Java classes, but you could put pretty much anything into a JAR file, and you can create JARs that have whatever files and directory formats you want. For instance, Oracle (at least in 8.1.x) actually uses JAR files to distribute the Oracle software on the install CDs, and IBM uses them to distribute an installable version of the documentation for WebSphere. Technically you can put JSP pages, HTML pages, servlet classes, etc. in a JAR, but you generally don't -- you use a WAR.
    WAR files are JAR files with a specific format. This format is defined in the servlet spec (which you can look at on the java.sun.com site). Essentially, the WAR file is a standard format for web applications that has specific directories and specific files. This includes a WEB-INF directory, a WEB-INF/web.xml file used to describe the application, a WEB-INF/lib directory for JAR files used by the application, and a WEB-INF/classes directory for class files that aren't distributed in a JAR. You would put the pages (JSPs and HTML) in the WAR as well. Then, you can distribute your application as one file, instead of as a collection of images, HTML pages, and Java classes.
    Source:
    http://www.theserverside.com/discussions/thread.tss?thread_id=10905
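
    The WEB-INF layout described above is easy to see concretely: a WAR is just a ZIP archive with a mandated directory structure. A minimal sketch using Python's standard zipfile module (file names and contents are illustrative placeholders, not a deployable application):

    ```python
    # Sketch: build a minimal (hypothetical) WAR in memory and list its
    # entries, showing the directory layout the servlet spec requires.
    import io
    import zipfile

    WEB_XML = """<?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5"/>
    """

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as war:
        war.writestr("index.jsp", "<html><body>Hello</body></html>")  # pages at the root
        war.writestr("WEB-INF/web.xml", WEB_XML)       # deployment descriptor
        war.writestr("WEB-INF/classes/.keep", "")      # compiled classes go here
        war.writestr("WEB-INF/lib/.keep", "")          # bundled JARs go here

    with zipfile.ZipFile(buf) as war:
        names = war.namelist()

    print(names)
    ```

    A plain JAR built the same way would have no required layout at all, which is exactly the distinction drawn above.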

  • Differences Btw. Counter Gate and Source Channels

    I am working with a Flow Technology turbine flowmeter. It has a 30 mV-300 mV sine wave signal. I want to convert this signal into a usable pulse signal. I am trying to find information on connecting this signal to the counter channel of my PCI card. What are the differences between the counter gate and source channels? If anyone knows where I can get this information, please let me know.

    Hm. You seem to be asking two different questions:
    1) Counter gate and source terminals.
    It would depend on the exact hardware, but the source terminal would generally be the one that causes the counter to increment/decrement its count. The gate would control whether the source is allowed to do that. Some counter chips use the gate for more sophisticated counter control.
    The best place to learn about counter/timer functions would probably be the NI-DAQmx Help that installs along with NI-DAQmx.
    2) How to convert a 30-300 mV sine wave to pulses. I take it that it's the frequency of the sine wave that you're after.
    This isn't a job for a counter. A counter requires a well-conditioned digital signal satisfying the rise-time and voltage levels of a digital signal.
    You probably need to acquire your signal with an analog input and then post-process it to calculate the frequency. There are a number of ways to do that; the simplest would probably be to look for zero crossings in the analog signal. I don't know LabVIEW, but I bet there's a canned VI to do just that.
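
    The zero-crossing idea can be sketched in a few lines. This is an illustration, not LabVIEW code: the sample rate, tone frequency, and amplitude below are made-up values standing in for the sampled flowmeter signal.

    ```python
    # Sample a sine wave, count positive-going zero crossings, and
    # estimate the frequency from the crossing count.
    import math

    fs = 10_000.0   # samples per second (illustrative)
    f_true = 50.0   # simulated flowmeter tone, Hz (illustrative)
    n = 10_000      # one second of data

    samples = [0.15 * math.sin(2 * math.pi * f_true * i / fs) for i in range(n)]

    # A positive-going zero crossing occurs where the signal goes from
    # non-positive to positive between consecutive samples.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a <= 0.0 < b)

    duration = n / fs             # seconds of data
    f_est = crossings / duration  # one positive-going crossing per cycle

    print(round(f_est, 1))
    ```

    A real signal would need a noise guard (e.g. hysteresis around zero) before counting, but the estimate above already recovers the simulated 50 Hz tone.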
    John Weeks
    WaveMetrics, Inc.
    Phone (503) 620-3001
    Fax (503) 620-6754
    www.wavemetrics.com

  • Difference b/w sy-index and sy-tabix

    Hi all,
    Could you explain the difference between sy-index and sy-tabix?
    regards,
    Selva

    Hi,
    sy-index - For Loops, Current Loop Pass.
    sy-tabix - For Internal Table, Current Line Index in internal table.
    sy-tabix is used at internal table operations (READ TABLE ... LOOP AT ...), sy-index is used at cycles (DO...n TIMES).
    SY-TABIX:
    Current line of an internal table. SY-TABIX is set by the statements below, but only for index tables. The field is either not set or is set to 0 for hashed tables.
    It can be set by the following statements:
    1. APPEND
    2. COLLECT
    3. LOOP AT
    4. READ TABLE
    5. SEARCH
    APPEND sets SY-TABIX to the index of the last line of the table, that is, it contains the overall number of entries in the table.
    COLLECT sets SY-TABIX to the index of the existing or inserted line in the table. If the table has the type HASHED TABLE, SY-TABIX is set to 0.
    LOOP AT sets SY-TABIX to the index of the current line at the beginning of each loop pass. At the end of the loop, SY-TABIX is reset to the value that it had before entering the loop. It is set to 0 if the table has the type HASHED TABLE.
    READ TABLE sets SY-TABIX to the index of the table line read. If you use a binary search and the system does not find a line, SY-TABIX contains the total number of lines, or one more than the total number of lines. SY-TABIX is undefined if a linear search fails to return an entry.
    SEARCH <itab> FOR sets SY-TABIX to the index of the table line in which the search string is found.
    SY-INDEX:
    SY-INDEX holds the number of the current iteration within DO...ENDDO and WHILE...ENDWHILE loops.
    for example,
    DO.
      WRITE SY-INDEX.
      IF SY-INDEX = 3.
        EXIT.
      ENDIF.
    ENDDO.
    it gives output as:
    1 2 3
    Regards,
    Raj.
