Data Guard as a pass-through?

Scenario is . . .
Host A is the Primary
Host B is a Standby
Host C is a Standby
Now I know we can set up A->B and A->C
Can we set up A->B->C?
Essentially using B as a pass-through between A and C. Or you can see it as B being in a DMZ.
Would B have to be Active Data Guard or anything special?
I guess what I am really asking is: can a standby be used as the source for another standby?

See http://download.oracle.com/docs/cd/E11882_01/server.112/e10700/cascade_appx.htm#i638620 for more information on Cascaded Standby Destinations. Cascading has the following restrictions:
* Logical and snapshot standby databases cannot cascade primary database redo.
* SYNC destinations cannot cascade primary database redo in a Maximum Protection Data Guard configuration.
* Cascading is not supported in Data Guard configurations that contain an Oracle Real Application Clusters (RAC) primary database.
* Cascading is not supported in Data Guard broker configurations.
Keep an eye on this chapter and Note 409013.1 "Cascaded Standby Databases" when the next patch set for 11.2 comes out :^)
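For illustration, a minimal sketch of the initialization parameters involved, assuming placeholder DB_UNIQUE_NAMEs/TNS aliases host_a, host_b and host_c (the cascading standby B also needs standby redo logs):
  -- On all three databases: declare every member of the configuration
  ALTER SYSTEM SET LOG_ARCHIVE_CONFIG = 'DG_CONFIG=(host_a,host_b,host_c)';
  -- On Host A (the primary): ship redo to the cascading standby B
  ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
    'SERVICE=host_b ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=host_b';
  -- On Host B (the cascading standby): forward redo received into its standby
  -- redo logs on to C; note STANDBY_LOGFILES,STANDBY_ROLE in VALID_FOR
  ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
    'SERVICE=host_c VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=host_c';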
Larry

Similar Messages

  • Data has changed after passing through FIFO?

    Dear experts,
    I am currently working on a digital triangular shaping using the 7966R FPGA + 5734 AI. I am using LabView 2012 SP1.
    A few days ago I encountered a problem with my FIFOs that I have not been able to solve since. I'd be glad if somebody could point out a solution / my error.
    Short description:
    I am writing U16 values between ~32700 and ~32800 to a U16-configured FIFO. The FIFO output does not coincide with the data I have been writing to the FIFO but is rather bit-shifted, or something is added. This problem does not occur if I execute the VI on the dev PC with simulated input.
    What I have done so far:
    I am reading all 4 channels of the 5734 inside a SCTL. The data is stored in 4 feedback nodes. I am applying a triangular shaping to channels 0 and 1 by using 4 FIFOs that have been prefilled with a predefined number of zeros to serve as buffers. So it's something like (FB = feedback node):
    A I/O 1  --> FB --> FIFO 1 --> FB --> FIFO 2 --> FB --> Do something
    A I/O 2  --> FB --> FIFO 3 --> FB --> FIFO 4 --> FB --> Do something
    This code shows NO weird behaviour and works as expected.
    The Problem:
    To reduce the number of FIFOs needed, I then decided to interleave the data and to use only 2 FIFOs instead of 4. You can see the code in the attachment. As you can see, I have not really changed anything in the code structure in general.
    The input to the FIFO is a U16. All FIFOs are configured to store U16 data.
    The data that I am writing to the FIFO can be seen in channel 0 of the output attachment.
    The output after passing through the two FIFOs can be seen in channel 2 of the same picture.
    The output after passing through the first FIFO (times 2) can be seen in channel 3 of the picture.
    It looks like the output is bit-shifted and truncated as it enters Buffer 1. Yet the difference between the input and output is not exactly a factor of 2. I also considered the possibility that the FIFO adds both write operations (CH0 + CH1), but that also does not account for the value of the output.
    The FIFOs are all operating normally, i.e. none throws a timeout. I also tried several different orders of reading/writing to the FIFOs and different ways of enforcing this order (i.e. case structures, flat and stacked sequences). The FIFOs are also large enough to store the amount of data buffered, no matter whether I write or read first.
    Thank you very much,
    Bjorn
    Attachments:
    FPGA-code.png (61 KB)
    FPGA-output.png (45 KB)

    During the last couple of days I tried the following:
    1. Running the FPGA code on the development PC with simulated I/O. The behavior was normal, i.e. as I intended the code to perform.
    2. I tested the code on the development PC with the square and sine wave generation VI as 'simulated' I/O. The code performed normally.
    3. I replaced the FIFOs with queues and ran my logic on the dev PC. The logic performed completely normally.
    4. Right now the code is compiling with constants as inputs, as you suggested...
    I am currently trying to get LabView 2013 on the development machine. It seems like my last real hope is that the issue is a bug in the XILINX 13.4 compiler tools and that the 14.4 tools will just make it disappear...
    Nevertheless, I am still open to suggestions. Some additional info about the FIFOs of concern:
    Buffer 1 and 2:
    - Type: Target Scoped
    - Elements Requested: 1023
    - Implementation: Block Memory
    - Control Logic: Target Optimal
    - Data Type: U16
    - Arbitrate for Read: Never Arbitrate
    - No. Elements Per Read: 1
    - Arbitrate for Write: Never Arbitrate
    - No. Elements Per Write: 1
    The inputs from the NI 5734 are U16, so I am wiring the right data type to the FIFOs. I also don't have any coercion dots within my FPGA VI. And so far the problem has only occurred after the VI has been compiled onto the FPGA. Could some of the FIFOs/block memory be corrupted because we have written stuff onto the FPGA too often?

  • How to pass table data to a BRF Plus application through an ABAP program

    Dear All,
    I have a question related to BRF Plus management through an ABAP program.
    On the BRF Plus application end, field1, field2 and field3 are importing parameters.
    Table1->structure1->field4,field5 is the table; within it there is one structure with 2 fields.
    In my ABAP program I am getting the values of the fields; let us take field1, field2, field3, field4, field5.
    And my question is:
    1) How to pass fields to the BRF Plus application from the ABAP program.
    2) How to pass table data to the BRF Plus application from the ABAP program.
    3) How to pass structure data to the BRF Plus application from the ABAP program.
    4) How to get the result data from the BRF Plus application into my ABAP program.
    And finally, how to run FDT_TEMPLATE_FUNCTION_PROCESS.
    How do I get the code generated automatically when calling the function in the BRF Plus application?
    Regards
    venkata.

    Hi Prabhu,
    Since it is a custom FM, I can't see it in my system.
    If you want to bring the data into an internal table, there are two ways:
    1) Your FM should expose the itab as a CHANGING parameter, so that you can declare an internal table of the same type and pass it through the FM.
    2) Read the values one by one and append them to an internal table.
    Thanks
    Rohit G

  • How can I see how much data passes through my Time Capsule?

    I am thinking of using a cellular data plan at home. My current, rural internet provider is slow and unreliable. I use a MiFi as a backup and have 4G service which is much faster and rarely goes down. I need to see how much data is downloaded and uploaded to compare costs. All our data passes through my Time Capsule.

    A similar app to the one Bob mentions is Peakhour; it works on any of the newer OS versions.
    https://itunes.apple.com/au/app/peakhour/id468946727?mt=12
    It is a good app. BUT.. fat ugly BUTT.. just the same as Bob has explained, it depends on SNMP to work.. and so, due to Apple removing a very useful and functional protocol from its AirPort range, you can no longer use it. Bizarre.
    I strongly recommend a Netgear WNDR3800 (an older model now, but you can pick one up cheaply on eBay) and a 3rd-party firmware called Gargoyle. Apple deletes my posts if I point you to it, so you will have to search for it yourself.
    Replace your tall TC with the Netgear as the main router, bridge the TC to it, and you can continue to use its wireless and TM backups. The advantage is that Gargoyle will not only measure everyone's usage by IP, it can also set a quota on everyone using the net, and you can set that quota hourly, daily, weekly or monthly. It will track the usage, and you can see at a glance what everyone has used.
    It is simple to load, just like a standard firmware update. The interface is as clear as anyone can make it with such a lot of tools. And the actual router is powerful enough to provide excellent QoS and parental controls on top of the measurements and quotas.

  • BEx Query: make data pass through user exit calculation at navigation time

    Hi all!
    I have a new requirement and I don't know how to solve it...
    Now, when I execute a web model containing a query, the system "reads" a date and calculates the query based on that date in a user exit defined in CMOD, for example filtering data with an interval between January and the date read.
    Besides, I have in the web model a dropdown item where the user can choose other months. The dropdown item only shows single values, but now if I choose a month, the query only shows data for that month.
    I need the system to filter the query with the new interval, for example between January and the new month the user has just chosen.
    Does anyone know a way to make a query pass through the user exit calculation again after executing it for the first time? Any other ideas? I need the query to "re-execute" and filter the data (create a new interval) based on the value the user chose.
    (Sorry about any inconvenience; I posted the problem in another, more specific SDN forum, but as I received no answer I've decided to explain it here...)
    Thank you! Points will be assigned.

    Any ideas please?

  • Does ODBC encrypt data while passing through the network?

    Does ODBC encrypt data while passing through the network?

    ODBC uses the underlying Oracle networking components to transmit data. By default, these components do not encrypt data, although they can be made to do so -- see the "SSL Encryption" thread from a few days ago.
    Justin
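    For what it's worth, a minimal sqlnet.ora sketch of Oracle native network encryption (the parameter names are real; REQUIRED and AES256 are example choices, not the only valid values, and historically this feature required the Advanced Security Option):
      # Server-side sqlnet.ora
      SQLNET.ENCRYPTION_SERVER = required
      SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
      # Client-side sqlnet.ora
      SQLNET.ENCRYPTION_CLIENT = required
      SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)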

  • Compressing Data Passed Through WebService

    Hi there...
    Before I start explaining the problem: I am not an expert in web services or WebLogic.
    1- I have a web service that accepts lots of textual information and responds with lots of textual information as well. Is there an option in the WebLogic settings that enables data compression automatically, or should I implement data compression on the client and server?
    2- Also, it seems that the parameters passed through the web service carry a lot of XML overhead information. Is there a way to reduce the amount of overhead information passed?
    Notice that SSL is being used.
    3- Finally, what are possible causes of slow responses from the server? I am getting about 8 to 10 seconds average response time from the server. I don't think it is the WebLogic server, simply because the development environment uses the local LAN and there the response is much faster. Any ideas?
    thanks

    1- I have a web service that accepts lots of textual information and responds with lots of textual information as well. Is there an option in the WebLogic settings that enables data compression automatically, or should I implement data compression on the client and server?
    Not that I know of; I think you have to resort to zipping the messages.
    2- Also, it seems that the parameters passed through the web service carry a lot of XML overhead information. Is there a way to reduce the amount of overhead information passed?
    A way to reduce your XML overhead is to define small messages in your WSDL (and XSD).
    3- Finally, what are possible causes of slow responses from the server? I am getting about 8 to 10 seconds average response time from the server. I don't think it is the WebLogic server, simply because the development environment uses the local LAN and there the response is much faster. Any ideas?
    Network overhead. As you already mentioned in the other two questions, you are sending large messages. Maybe your system administrator has a monitoring tool for the network which can give you some insight into the matter.
    Information concerning WebLogic and Web Services can be found here: http://download.oracle.com/docs/cd/E21764_01/web.1111/e14529/web_services.htm

  • Data Guard phys. standby creation failing in 10g EM?

    All,
    I'm working on setting up Oracle Data Guard in a test environment at my company. It was suggested to me to go through the GUI to set up a physical standby database and automate the management of my DG setup.
    However, the several attempts that I have made fail during the "Create Standby Database" step with the following errors:
    dgcreate.recoverStby: ALTER DATABASE RECOVER CANCEL
    dgcreate.recoverStby: ALTER DATABASE OPEN READ ONLY
    SQL Error: ORA-16004: backup database requires recovery
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/u01/app/oracle/product/10.2.0/db_1/oradata/oratstdg/system01.dbf' (DBD ERROR: OCIStmtExecute)
    Can anyone pass along any ideas as to where the issue might be? I would really appreciate it - I need this up and going ASAP. Thanks!
    Matt Gordon
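    For what it's worth, the ORA-16004/ORA-01152 pair generally means the standby was opened read-only before enough redo had been applied. A hedged sketch of the equivalent manual steps on the standby (not the EM fix itself, just what the failing step does):
      -- Apply more redo first instead of opening read-only right away
      ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
      -- Once the standby has caught up, stop the apply and open read-only
      ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
      ALTER DATABASE OPEN READ ONLY;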

    Ugonic,
    I actually did create a new Physical Standby Database. I did not have an existing database configured as a standby database. In fact, the standby server that I used only had the Oracle binaries installed on it - there was no database on that system.
    Additionally, I did not have a saved configuration to use, so I generated completely new files and a new configuration.
    I left the screen alone for over an hour before checking the status and seeing these errors in the log - I am wondering if anybody else has encountered this same issue.
    On a side note, I've stumbled my way further into this process and now have a functional DG configuration. However, I can only switch over from SQL*Plus; the GUI way fails. When I click on Verify in the GUI I get the following log:
    Initializing.
    Connected to instance fcoracle1.cdps.cdp:oratest1
    Starting alert log monitor...
    Updating Data Guard link on database homepage...
    WARNING: Database oratest2.cdps.cdp is not discovered.
    Data Protection Settings:
    Protection mode : Maximum Performance
    Log Transport Mode settings:
    oratest1.cdps.cdp: ASYNC
    oratest2.cdps.cdp: ASYNC
    Checking standby redo logs.....OK
    Checking Data Guard status
    oratest1.cdps.cdp : Normal
    oratest2.cdps.cdp : Normal
    Checking Inconsistent Properties
    Checking agent status
    oratest1.cdps.cdp ... WARNING: Undefined subroutine &main::executeSQLPlusSYSDBA called at - line 5.
    WARNING: Switchover or failover may not succeed as a result.
    oratest2.cdps.cdp ... WARNING: Undefined subroutine &main::executeSQLPlusSYSDBA called at - line 5.
    WARNING: Switchover or failover may not succeed as a result.
    Switching log file 235.Done
    Checking applied log on oratest2.cdps.cdp....OK
    Processing completed.
    Have you experienced this problem and/or do you have any suggestions to resolve it?
    Matt

  • Data Guard Agent, Authentication Failure

    I'm working with two Windows 2003 servers, attempting to use one as a standby and one as a primary database using Data Guard. However, I'm having a bit of trouble getting the one server to communicate through the Management Agent and Management Service. I've done Management Agent installs on about 20 XP workstations and they've all worked wonderfully with Oracle Grid Control.
    When the agent on my would-be standby database instance starts up, I'm receiving the following errors in emagent.trc:
    2005-11-01 15:16:54 Thread-3836 WARN main: clear collection state due to OMS_version difference
    2005-11-01 15:16:54 Thread-3836 WARN command: Job Subsystem Timeout set at 600 seconds
    2005-11-01 15:16:54 Thread-3836 WARN upload: Upload manager has no Failure script: disabled
    2005-11-01 15:16:54 Thread-3836 WARN upload: Recovering left over xml files in upload directory
    2005-11-01 15:16:54 Thread-3836 WARN upload: Recovered 0 left over xml files in upload directory
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric RuntimeLog does not have any data columns
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric collectSnapshot does not have any data columns
    2005-11-01 15:16:54 Thread-3836 ERROR engine: [oracle_bc4j] CategoryProp NAME [VersionCategory] is not one of the valid choices
    2005-11-01 15:16:54 Thread-3836 ERROR engine: ParseError: File=D:\oracle\product\10.1.0\dg\sysman\admin\metadata\oracle_bc4j.xml, Line=486, Msg=attribute NAME in <CategoryProp> cannot be NULL
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name EFFICIENCY__BYTES_SAVED_WITH_COMPRESSION__AVG_PER_SEC_SINCE_START too long, truncating to EFFICIENCY__BYTES_SAVED_WITH_COMPRESSION__AVG_PER_SEC_SINCE_STAR
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name ESI__ERRORS__ESI_DEFAULT_FRAGMENT_SERVED__AVG_PER_SEC_SINCE_START too long, truncating to ESI__ERRORS__ESI_DEFAULT_FRAGMENT_SERVED__AVG_PER_SEC_SINCE_STAR
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_STA
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__LATENCY__MAX_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__LATENCY__MAX_PER_SEC_SINCE_STAR
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__LATENCY__AVG_PER_SEC_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__LATENCY__AVG_PER_SEC_SINCE_STAR
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVERS__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_START too long, truncating to SERVERS__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_STA
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__REQUESTS__MAX_PER_SEC_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__REQUESTS__MAX_PER_SEC_SINCE_STAR
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__REQUESTS__AVG_PER_SEC_SINCE_STAR
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Column name SERVER__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_START too long, truncating to SERVER__APP_SRVR_STATS__SERVER__OPEN_CONNECTIONS__MAX_SINCE_STAR
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric Wireless_PID does not have any data columns
    2005-11-01 15:16:54 Thread-3836 WARN metadata: Metric numberOfAppDownloadsOverInterval_instance does not have any data columns
    2005-11-01 15:17:00 Thread-4172 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
    SQL = " OCISessionBegin"...
    LOGIN = dbsnmp/<PW>@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MY_DATABASE)(PORT=1521))(CONNECT_DATA=(SID=CPD2DB)))
    2005-11-01 15:17:00 Thread-4172 ERROR vpxoci: ORA-01017: invalid username/password; logon denied
    2005-11-01 15:17:00 Thread-4172 WARN vpxoci: Login 0xe8c220 failed, error=ORA-01017: invalid username/password; logon denied
    2005-11-01 15:17:00 Thread-4172 WARN TargetManager: Exception in computing dynamic properties of {MY_DATABASE, oracle_database },MonitorConfigStatus::ORA-01017: invalid username/password; logon denied
    2005-11-01 15:17:01 Thread-4172 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
    I've already toggled the Local Security Policy (Log On As Batch Job) setting in Windows, unlocked the Monitoring Profile account, etc. I've also tried to set the Preferred Host Credentials for the database, but it doesn't seem to want to authenticate the Windows 2003 Administrator user.
    Anyone have any other suggestions?

    Check the following:
    Does the user have administrative privileges on the system?
    Is the user running this a member of the ORA_DBA group?
    Does the user have the local security policy "Logon as Batch Job"?
    Have you set the OS Preferred Credentials? If you are a domain user, this will be looking for domain\user name instead of just the user name.
    On another note:
    Have you done any upgrades to the OMS repository?
    If yes, is the new repository compatible with the EM Console?
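    Also, given the repeated ORA-01017 entries for dbsnmp in the trace, the agent's monitoring credentials may simply not match the database. A hedged sketch (new_password is a placeholder; afterwards, update the monitoring credentials for the database target in Grid Control to match):
      -- As SYSDBA on the monitored database
      ALTER USER dbsnmp ACCOUNT UNLOCK;
      ALTER USER dbsnmp IDENTIFIED BY new_password;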

  • DMS Document upload: does it pass through sap DMS ?

    Dear All,
    We have a question concerning the transmission of documents from the client to DMS and the Content Server: does the document need to pass through the SAP server?
    Document upload (create document, CV01N):
    Does the document go directly from the client to the Content Server, OR does it pass through the SAP DMS server before being stored in the CS?
    Document download (read document, CV03N):
    Does the document go directly from the CS to the client, OR does it pass through the SAP DMS server?
    This would be interesting to know for network performance.
    best regards,

    Hi Gurus
    Is the cache server default functionality of the content server, or is some configuration required on our part?
    Is it that the cache server acts as the RAM of our system?
    Please explain the partitioning, or bifurcation, of the Content Server, as you described:
    the content server is divided into storage categories and these in turn into content repositories.
    Please clarify the points below:
    1) Any server or PC can be made a Content Server by installing the Content Server CD, if I am right?
    2) What are the practical and functional benefits of partitioning a content server into content repositories? Is it for authorization and storing data by naming convention, or can it also help in copying data from a specific content repository if needed? (Are content repositories a logical partition or a physical partition, like the B, C, D, F drives of a PC hard disk?)
    3) Can/should there be multiple content server installations for a particular (production) client?
    4) Can archiving be done, say, by creating a separate content repository inside the same Content Server, or is it mandatory to have a separate archiving server?
    Please give some idea with examples.
    Thanks and regards
    Kumar

  • Data Guard configuration for RAC database disappeared from Grid control

    Primary Database Environment - Three node cluster
    RAC Database 10.2.0.1.0
    Linux Red Hat 4.0 2.6.9-22 64bit
    ASM 10.2.0.1.0
    Management Agent 10.2.0.2.0
    Standby Database Environment - one Node database
    Oracle Enterprise Edition 10.2.0.1.0 Single standby
    Linux Red Hat 4.0 2.6.9-22 64bit
    ASM 10.2.0.1.0
    Management Agent 10.2.0.2.0
    Grid Control 10.2.0.1.0 - Node separate from standby and cluster environments
    Oracle 10.1.0.1.0
    Grid Control 10.2.0.1.0
    Red Hat 4.0 2.6.9-22 32bit
    After adding a logical standby database through Grid Control for a RAC database, I noticed some time later that the Data Guard configuration had disappeared from Grid Control. Not sure why, but it is gone. I did notice that something went wrong with the standby creation, but I did not get much feedback from Grid Control. The last thing I did was to view the configuration; see the output below.
    Initializing
    Connected to instance qdcls0427:ELCDV3
    Starting alert log monitor...
    Updating Data Guard link on database homepage...
    Data Protection Settings:
    Protection mode : Maximum Performance
    Log Transport Mode settings:
    ELCDV.qdx.com: ARCH
    ELXDV: ARCH
    Checking standby redo log files.....OK
    Checking Data Guard status
    ELCDV.qdx.com : ORA-16809: multiple warnings detected for the database
    ELXDV : Creation status unknown
    Checking Inconsistent Properties
    Checking agent status
    ELCDV.qdx.com
    qdcls0387.qdx.com ... OK
    qdcls0388.qdx.com ... OK
    qdcls0427.qdx.com ... OK
    ELXDV ... WARNING: No credentials available for target ELXDV
    Attempting agent ping ... OK
    Switching log file 672.Done
    WARNING: Skipping check for applied log on ELXDV : disabled
    Processing completed.
    Here are the steps followed to add the standby database in Grid Control
    Maintenance tab
    Setup and Manage Data Guard
    Logged in as sys
    Add standby database
    Create a new logical standby database
    Perform a live backup of the primary database
    Specify backup directory for staging area
    Specify standby database name and Oracle home location
    Specify file location staging area on standby node
    At the end I am presented with a review of the selected options, and then the standby database is created.
    Has anybody come across a similar issue?
    Thanks,

    Any resolution on this?
    I just created a logical standby database and I'm getting the same warning (WARNING: No credentials available for target ...) when I do a 'Verify Configuration' from the Data Guard page.
    Everything else seems to be working fine. Logs are being applied, etc.
    I can't figure out what credentials it's looking for.

  • What is the role of the LNS process in Oracle 10g Data Guard?

    Hi,
    Please help me understand the actual working of the LNS process in Oracle 10g Data Guard.
    When I use SYNC redo transport, the output of v$managed_standby is like this:
    PROCESS PID STATUS CLIENT_PROCESS GR# SEQ#
    ARCH 9258 CLOSING ARCH 2 498
    ARCH 9260 CLOSING ARCH 1 499
    ARCH 9262 CLOSING ARCH 2 496
    ARCH 9264 CLOSING ARCH 1 497
    LGWR 9206 CLOSING LGWR 2 482
    It does not display any info about LNS; does that mean LNS is not working in SYNC redo transport mode?
    But if I change it to ASYNC, then the output of v$managed_standby is like this:
    PS PID STS CPS GR# SEQ#
    ARCH 9258 CLOSING ARCH 1 509
    ARCH 9260 CLOSING ARCH 2 510
    ARCH 9262 CLOSING ARCH 1 505
    ARCH 9264 CLOSING ARCH 2 508
    LGWR 9206 CLOSING LGWR 1 503
    LNS 10528 CLOSING LNS 2 510
    Now it displays the info about the LNS process...
    I read in the Oracle documentation that the LNS process sends redo data from the primary (through the network service) to RFS on the standby side.
    But the first output suggests that LNS is not working; if not, then which process sends redo from the primary to RFS on the standby?
    I also read in a blog that LGWR uses some extra buffer space in the primary DB's SGA to write redo into, and LNS reads redo from that buffer and sends it to RFS on the standby side.
    I am totally confused... can you please help me with the correct logic behind this?
    Thanks in advance.

    Hello,
    On the primary database, when you run the query against v$managed_standby, it shows the LNS process, since this process sends redo info to the standby database; on the standby database, the RFS process receives the redo information.
    So when you query v$managed_standby on the primary database it shows LNS, and when you query it on the standby database it shows RFS. Please let us know where you are running the query.
    Refer this http://datadisk.co.uk/html_docs/oracle_dg/architecture.htm
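    For illustration, a quick way to check this yourself: run the same statement on each side. On the primary you should see LGWR/LNS and ARCH rows; on the standby, RFS and the apply process instead.
      SELECT process, pid, status, client_process, sequence#
        FROM v$managed_standby;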
    Please consider closing your questions by providing appropriate points and marking them as answered. Please keep the forum clean!

  • HDMI pass-through no picture

    I wonder if anyone could please help. I have just bought an AppleTV and am trying to use it with my old Samsung DLP (which has no HDMI, but DVI) and a Denon AVR 789 (which has HDMI pass-through). When I connect the AppleTV through the receiver (using an HDMI-to-HDMI lead, then HDMI to DVI) I get no picture. When I go directly from the AppleTV to the TV using HDMI to DVI, it works fine. I have tried changing the cables, and when I connect my FIOS TV box through the receiver it works fine. My AppleTV software is also up to date. Could this be something to do with HDCP?
    I would be grateful of any help

    Yes, it could be HDCP.
    Generally, I would look at the source device (TV) as the root of the problem, in that it doesn't handle HDCP hopping properly; however, the TV works with other receivers, so I have to have my doubts about where the blame lies for these problems.
    You could try powering up the devices in a different order, say from the delivery end first.
    It could be however that you simply don't have the receiver set correctly and need to match up the inputs and outputs.

  • Logical partitioning, pass-through layer, query pruning

    Hi,
    I am dealing with performance guidelines for BW and encountered a few interesting topics which, however, I do not fully understand.
    1. Maintenance of logical partitioning.
    Let's assume logical partitioning is performed on year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the multiprovider? Is there any automatic procedure by SAP that supports the creation of new objects, or is it fully manual?
    2. Pass-through layer.
    There is very little information about this basic concept. Anyway:
    - Is the pass-through DSO a write-optimized one? Does it store only one load, the last one? Is it deleted after the load successfully finishes (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO replace the PSA functionally (i.e. can the PSA be deleted after every load as well)?
    3. Query pruning
    Does this happen automatically at the DB level, or are additional developments with exit variables, steering tables and FMs required?
    4. DSOs for master data loads
    What is the benefit of using full MD extraction and a DSO delta instead of MD delta extraction?
    Thanks,
    Marcin

    1. Maintenance of logical partitioning.
    Let's assume logical partitioning is performed on year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the multiprovider? Is there any automatic procedure by SAP that supports the creation of new objects, or is it fully manual?
    Logical partitioning is when you have separate ODSs / cubes for separate years, etc.
    There is no automated way; however, if you want to, you can physically partition the cubes using time periods and extend them regularly using the repartitioning options provided.
    2. Pass-through layer.
    There is very little information about this basic concept. Anyway:
    - Is the pass-through DSO a write-optimized one? Does it store only one load, the last one? Is it deleted after the load successfully finishes (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO replace the PSA functionally (i.e. can the PSA be deleted after every load as well)?
    Usually a pass-through layer is used to:
    1. Ensure data consistency
    2. Possibly use deltas
    3. Apply additional transformations
    In a write-optimized DSO, the request ID is the key, and hence delta is based on the request ID. If you do not have any additional transformations, then a write-optimized DSO is essentially like your PSA.
    3. Query pruning
    Does this happen automatically at the DB level, or are additional developments with exit variables, steering tables and FMs required?
    Query pruning depends on the rule-based and cost-based optimizers within the DB; you do not have much control over the execution of a query other than having up-to-date statistics, building aggregates, etc.
    4. DSOs for master data loads
    What is the benefit of using full MD extraction and a DSO delta instead of MD delta extraction?
    It depends more on the data volumes and also the number of transformations required...
    If you have multiple levels of transformations, use a DSO; or if you have very high data volumes and want to identify changed records, then use a DSO.

  • Handling flat files in 'pass through' mode

    Is it possible to handle text files in PI in a pass-through mode, i.e. just picking them up with a file adapter and using another file adapter to send them out, e.g. via FTP? Our customer wants to use the traceability of PI to do this. There would be no need for a transform.
    I suspect the answer is no, because PI processes XML messages. However, I recall that it is possible to send IDocs through in pass-through mode, so I wondered if it would be possible to do the same with flat files.
    BR,
    Tony.

    Hi,
    Yes, it is possible; you can even send an image file.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/6d967fbc-0a01-0010-4fb4-91c6d38c5816
    You just need to mention the dummy interfaces.
    "How to send any data (even binary) through XI, without using the Integration Repository" is a good one;
    this will solve your problem.
    Regards,
    Chetan Ahuja
    Edited by: Chetan Ahuja on Sep 17, 2008 12:49 PM
