Running scenario in different contexts

Hello,
I am running ODI over two topologies: one for DEV/TEST and one for Production, which is execution-only.
When I run my scenarios on Production I get the following error message:
com.sunopsis.core.SnpsInexistantObjectException: There is no connection for this logical schema/context pair:SRC_ERP_BOLINF / DEVELOPEMENT
The logical schema is indeed not defined for the Development context, since we are on Production... Moreover, why do I get this error when other scenarios and interfaces are working?
Can you please advise?
JF

Hi JF-
It seems that some of the variables, procedures, or interfaces in your package point to the DEV context, even though you are running with PROD as the execution context.
Try to use a single LOGICAL SCHEMA throughout your packages, so that you can switch between the different databases just by choosing a context, without making any modification.
For example:
LOGICAL SCHEMA: ORA_LSCHEMA
  DEV  -> development_db
  TEST -> test_db
  PROD -> prod_db
Hope this will help you.
Thanks,
Saravanan Rajavel

Similar Messages

  • Urgent! Important Regarding Running ODI Interfaces in Different Contexts!!

    Hi all,
    I have three different instances in which I have to run an ODI interface: Development, Testing, and Production.
    I created the interfaces in the Development context.
    When I try to run them in the Testing context, some of the interfaces error out saying the table or view does not exist (for the $ tables).
    It is actually picking up the Development context by default and creating those $ tables in the Dev instance only.
    But when I reimport the SQL LKM file and attach the newly imported LKM, it runs fine.
    Is this problem in any way related to the ODI cache? If yes, can anyone tell me how to delete the ODI cache before running interfaces in a different context?
    Any help or pointers regarding this will be highly appreciated.
    Thanks and Regards,
    Priyanka

    Hi Guys
    I have two interfaces, 1. load_initial_ps_data and 2. load_hourly_ps_data, in one package called LOAD_PS_DATA, plus an OdiFileCopy step that copies files from the dev environment to the shared target drive. The ODI infrastructure is as follows: 1. two work repositories running on Linux; 2. the master repository running on Linux. I then schedule the agent (from the Linux box) to run the LOAD_PS_DATA scenario hourly.
    My problems are as follows
    1. If the scheduled agent is started from the ODI server, during its hourly execution it adds a "/" at the end of the physical file path. The physical file path defined in the topology is
    //nimbari.up.ac.za/ODI_GR/guestrecord.txt, so when the schedule kicks off from the Linux box where the ODI server is installed, it adds an extra '/' as follows: //nimbari.up.ac.za/ODI_GR//guestrecord.txt, and the Operator then says the file was not found. I tried replacing the '/' with '\' in the path, but it still does not work. To my surprise, when I start the scheduled agent from the ODI client installed on my Windows machine, everything works fine.
    2. I would like to be able to determine, at run time, the context in which the package was run, and then store that context in a variable.
    Your help will be highly appreciated.
    Thanks
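    Pending a proper fix on the agent side, one run-time workaround for problem 1 is to normalize the path before using it, collapsing any doubled separator while keeping the leading // of the share. A minimal sketch (the collapse_slashes helper is my own name, not an ODI API):

    ```python
    import re

    def collapse_slashes(path: str) -> str:
        """Collapse repeated '/' into one, preserving a leading '//' share prefix."""
        prefix = "//" if path.startswith("//") else ""
        return prefix + re.sub(r"/+", "/", path[len(prefix):])

    print(collapse_slashes("//nimbari.up.ac.za/ODI_GR//guestrecord.txt"))
    ```

    The same idea can be expressed inline in an ODI variable or procedure step; the point is only that a doubled "/" in the middle of the path is always safe to collapse.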

  • ORA-04062 error when running forms with different users

    ORA-04062 error when running forms with different users
    I have a form with a block that should display some data from another user's tables. (The other user's name is dynamic; it's selected from a list box.)
    I wrote a stored procedure to get the data from other user's tables.
    When I compile the form and run it as the same user I compiled it with, it works without any error. But when I run the compiled form as another user, I get the ORA-04062 (signature of procedure has been changed) error.
    I tried setting REMOTE_DEPENDENCIES_MODE to SIGNATURE in init.ora but it didn't help.
    My Forms version is 6i with Patch 15.
    Database version is 9.
    Here is my stored procedure:
    TYPE Scenario_Tab IS TABLE OF NUMBER(34) INDEX BY BINARY_INTEGER;
    TYPE Open_Curs IS REF CURSOR;
    PROCEDURE Get_Scenarios(User_Name IN VARCHAR2, Scen_Table OUT Scenario_Tab) IS
      Curs Open_Curs;
      i    NUMBER;
    BEGIN
      OPEN Curs FOR
        'SELECT Seq_No FROM ' || User_Name || '.scenario';
      i := 1;
      LOOP
        FETCH Curs INTO Scen_Table(i);
        EXIT WHEN Curs%NOTFOUND;
        i := i + 1;
      END LOOP;
      CLOSE Curs;
    END Get_Scenarios;
    I would be happy to solve this problem. It's really important.
    Maybe somebody can tell me another way to do what I want to do (getting a list of values from another user's tables).

    I think a better solution would be to create a package
    and put your own types and procedure into it:
    CREATE OR REPLACE PACKAGE PKG_XXX IS
      TYPE TYP_TAB_CHAR IS TABLE OF .... ;
      PROCEDURE P_XX ( Var1 IN VARCHAR2, var2 IN OUT TYP_TAB_CHAR );
    END;
    Then in your form:
    DECLARE
      var PKG_XXX.TYP_TAB_CHAR;
    BEGIN
      PKG_XXX.P_XX( 'user_name', var );
    END;

  • Problem with same application under two different context root

    JDev 11.1.1.6
    Does anyone have experience with such a scenario, i.e. the same app deployed under two different context roots?
    At a certain point, once both start to be used (and only in that case), there is sometimes a drastic slowdown, as if something blocks for a period. After some time the application starts to behave normally again, then it periodically speeds up and slows down. There is no trace in the log files, no exception, nothing.
    And all this in a situation where each app has only one user (so resources should not be the issue).
    Any comments ?

    same app, but with two diff context root ?
    A web app packaged in WAR can have only one context root. Package a web app in two different WARs for two different context roots.
      webapp1.war web.xml
    <?xml version='1.0' encoding='UTF-8'?>
    <weblogic-web-app>
      <context-root>context-1</context-root>
    </weblogic-web-app>
      webapp2.war web.xml
    <?xml version='1.0' encoding='UTF-8'?>
    <weblogic-web-app>
      <context-root>context-2</context-root>
    </weblogic-web-app>

  • Navigation rule with different context

    Hi,
    Could you help me configure the navigation rule between JSF pages in two different contexts?
    Page1 - ContextA ( war 1 )
    Page 2 - ContextB ( war 2 )
    Thanks

    Not possible. The best you can do is use ServletContext#getContext(), then obtain a RequestDispatcher from it, run it yourself inside the action method, and return null/void. One prerequisite is that the application server must support the "crosscontext" feature. Refer to the ServletContext javadoc and the app server's documentation for details.

  • Reading Attributes from different context nodes in the same view

    Hi,
    I have added a new field as part of an enhancement for Trade Promotions. This field is a checkbox and part of the context node TRADE in the view TPMOE/HeaderEOF. This field will be checked or unchecked using a logic in the background.
    The logic will be based on an attribute (Fund Plan ID) which is part of another context node FUNDPLAN in the same view.
    How can I read the attribute of FUNDPLAN context node in TRADE context node?
    A sample code will be quite helpful as I am new to CRM 2007.
    Thanks,
    Abhishek

    Hi Abhishek,
    If I understand your question correctly, you would like to access the Fund Plan ID (in a different context node) from the getter/setter methods of the checkbox attribute you have added.
    The code snippet mentioned by Sudeep works perfectly fine when you are making checks in the view implementation class. But since you are in the getter/setter methods of the context class, this does not work, as "me" always refers to the instance of the class you are currently in.
    In this case, what you need to do is:
    1) Create an attribute VIEW_CONTROLLER in your context class (here the context node is TRADE). The type of this attribute would be the same as the view controller class, CL_TPMOE_HEADEREOF0_IMPL.
    2) Go to the view controller class and redefine the method DO_VIEW_INIT_ON_ACTIVATION. This method is called only once, when the view is loaded for the first time.
    3) In this method, put the following code:
    me->typed_context->trade->view_controller = me.
    With the above code, you set the newly created attribute to the view controller instance.
    4) The next step is to go back to your getter/setter methods (or wherever you need it) and refer to the Fund Plan ID with the code snippet below:
    data: lr_entity type ref to cl_crm_bol_entity.
    lr_entity ?= me->view_controller->typed_context->fundplan->get_current( ).
    lv_field_value = lr_entity->get_property_as_string( ... ). " pass the field name here
    This should resolve the problem.
    Thanks,
    Vinay

  • How can I read published data from a datasocket server running in a different network?

    Hi all,
    I have been trying to solve this problem without any success. I want to develop a data acquisition VI to run on a computer at a remote location, and I want to use DataSocket technology to acquire and publish this data so that in my office (on a different network) I can read the published data and perform analysis on it. I can successfully use this approach when the two computers are on the same network, but not when they are on different networks. My real problem is how to specify the URL for the acquisition computer on one network while the analysis VI runs on a different network.
    I have attached two VIs to illustrate my point (I simply want to be able to read the random numbers generated on the acquisition computer on the other network). The first VI (RemoteDatasocketWrite.vi) will run on the remote computer with IP address 192.168.0.110. My office computer, on which RemoteDatasocketRead.vi runs, has IP address 192.168.0.11, and I can log in remotely to the remote computer over Remote Desktop Protocol using the name emelvin.001.eairlink.com.
    Is it possible to do what I am trying to do, or is there a simpler way to solve this problem?
    I will really appreciate any help towards a solution to this problem.
    Thanks
    Attachments:
    RemoteDatasocketWrite.vi ‏9 KB
    RemoteDatasocketRead.vi ‏9 KB

    Are you getting an error? The way you have it set up, you could have an error and it would never be displayed. Put an error indicator on your front panel and see what it gives you. Also, shared variables in a project can be useful; look at some examples of those.

  • Same query has vastly different run times on different DBs

    I have a query that is run on two different Oracle DBs (located on separate computers). One finishes in about 45 minutes; the other takes over two hours (how much longer, I can't tell you, as Linux keeps dropping the connection at the two-hour mark, but that's another story).
    The tables in each database have identical data; the only significant difference is that one uses tables while the other uses materialized views. The one with the tables is the faster one.
    Both databases are running 11.2.0.2.0 Enterprise Edition 64-bit.
    I ran an Explain Plan on the queries, and noticed that the faster one had an additional couple of lines in its plan.
    Here is the query:
    select pay_plan
         , case when salary < 10000
                then 0
                when salary >= 150000
                then 15
                else floor(salary / 10000)
           end as salary_group
         , pay_grade
         , pay_step
         , fy
         , count(employee_id) as employee_count
    from (
        select dcps.yr + 1 as fy
             , dcps.employee_id
             , dcpds.pay_plan
             , dcpds.pay_grade
             , dcpds.pay_step
             , sum(dcps.salary) as salary
        from (
          select /*+ index(dcps dcps_location) */
                 employee_id
               , extract(year from pay_date) as yr
               , sum(
                      case when grc like 'O%'
                           then 0
                           else amt_eec * 26
                      end
                    ) as salary
          from dcps
          where location like 'W%'
          and   to_number(to_char(pay_date, 'MMDD')) between 1014 and 1027
          and   substr(grc, 1, 1) IN ('B', 'C', 'D', 'E', 'F', 'H', 'L', 'O', 'R', 'S', 'T')
          group by employee_id, extract(year from pay_date)
        ) dcps
        join (
          select employee_id
               , pay_plan
               , pay_grade
               , pay_step
               , file_date
          from (
            select /*+ index(dcpds dcpds_location) */
                   employee_id
                 , pay_plan
                 , pay_grade
                 , decode(pay_plan, 'YA', 0, pay_step) as pay_step
                 , file_date
                 , max(file_date)
                   over (partition by extract(year from (file_date + 61)))
                   as last_file_date
            from dcpds
            where location like 'W%'
            and   pay_plan in ('GS', 'YA')
          )
          where file_date = last_file_date
        ) dcpds
        on (
             dcpds.employee_id = dcps.employee_id
             and dcps.yr = extract(year from dcpds.file_date)
           )
        group by dcps.yr, dcps.employee_id, dcpds.pay_plan, dcpds.pay_grade, dcpds.pay_step
    )
    group by pay_plan
        , case when salary < 10000
               then 0
               when salary >= 150000
               then 15
               else floor(salary / 10000)
          end
        , pay_grade
        , pay_step
        , fy;
    Here is the "faster" plan:
    (sorry about the formatting - it's taken from an XML version generated in Toad)
    id="0" operation="SELECT STATEMENT" optimizer="ALL_ROWS" cost="10,604,695" cardinality="46" bytes="2,346" cpu_cost="369,545,379,847" io_cost="10,595,408" time="148,466"
        id="1" operation="HASH" option="GROUP BY" cost="10,604,695" cardinality="46" bytes="2,346" cpu_cost="369,545,379,847" io_cost="10,595,408" qblock_name="SEL$1" time="148,466"
            id="2" operation="VIEW" object_owner="DDELGRANDE_DBA" object_instance="1" cost="10,604,694" cardinality="41,337" bytes="2,108,187" cpu_cost="369,477,028,079" io_cost="10,595,408" qblock_name="SEL$103D06FF" time="148,466"
                id="3" operation="HASH" option="GROUP BY" cost="10,604,694" cardinality="41,337" bytes="3,348,297" cpu_cost="369,477,028,079" io_cost="10,595,408" temp_space="4,178,000" qblock_name="SEL$103D06FF" time="148,466"
                    id="4" operation="HASH JOIN" cost="10,604,203" cardinality="41,337" bytes="3,348,297" cpu_cost="369,396,215,555" io_cost="10,594,919" temp_space="4,211,000" access_predicates="&quot;EMPLOYEE_ID&quot;=&quot;ITEM_2&quot; AND &quot;ITEM_1&quot;=EXTRACT(YEAR FROM INTERNAL_FUNCTION(&quot;FILE_DATE&quot;))" time="148,459"
                        object_ID="0" id="5" operation="VIEW" object_owner="SYS" object_name="VW_GBC_6" object_type="VIEW" object_instance="39" cost="2,195,131" cardinality="87,663" bytes="3,155,868" cpu_cost="241,010,751,843" io_cost="2,189,074" qblock_name="SEL$2EE98332" time="30,732"
                            id="6" operation="HASH" option="GROUP BY" cost="2,195,131" cardinality="87,663" bytes="3,155,868" cpu_cost="241,010,751,843" io_cost="2,189,074" temp_space="4,424,000" qblock_name="SEL$2EE98332" time="30,732"
                                id="7" operation="VIEW" object_owner="DDELGRANDE_DBA" object_instance="2" cost="2,194,600" cardinality="91,299" bytes="3,286,764" cpu_cost="240,889,683,025" io_cost="2,188,546" qblock_name="SEL$3" time="30,725"
                                    id="8" operation="HASH" option="GROUP BY" cost="2,194,600" cardinality="91,299" bytes="3,012,867" cpu_cost="240,889,683,025" io_cost="2,188,546" temp_space="4,424,000" qblock_name="SEL$3" time="30,725"
                                        object_ID="1" id="9" operation="TABLE ACCESS" option="BY INDEX ROWID" optimizer="ANALYZED" object_owner="CORP_FIN" object_name="DCPS" object_type="TABLE" object_instance="3" cost="2,194,088" cardinality="91,299" bytes="3,012,867" cpu_cost="240,769,155,979" io_cost="2,188,037" qblock_name="SEL$3" filter_predicates="TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION(&quot;DTE_PPE_END&quot;),'MMDD'))&gt;=1014 AND TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION(&quot;DTE_PPE_END&quot;),'MMDD'))&lt;=1027 AND (SUBSTR(&quot;GRC&quot;,1,1)='B' OR SUBSTR(&quot;GRC&quot;,1,1)='C' OR SUBSTR(&quot;GRC&quot;,1,1)='D' OR SUBSTR(&quot;GRC&quot;,1,1)='E' OR SUBSTR(&quot;GRC&quot;,1,1)='F' OR SUBSTR(&quot;GRC&quot;,1,1)='H' OR SUBSTR(&quot;GRC&quot;,1,1)='L' OR SUBSTR(&quot;GRC&quot;,1,1)='O' OR SUBSTR(&quot;GRC&quot;,1,1)='R' OR SUBSTR(&quot;GRC&quot;,1,1)='S' OR SUBSTR(&quot;GRC&quot;,1,1)='T')" time="30,718"
                                            object_ID="2" id="10" operation="INDEX" option="RANGE SCAN" optimizer="ANALYZED" object_owner="CORP_FIN" object_name="DCPS_LOCATION" object_type="INDEX" search_columns="1" cost="153,659" cardinality="348,929,550" cpu_cost="22,427,363,111" io_cost="153,095" qblock_name="SEL$3" access_predicates="&quot;LOCATION&quot; LIKE 'W%'" filter_predicates="&quot;LOCATION&quot; LIKE 'W%'" time="2,152"/
                        id="11" operation="VIEW" object_owner="DDELGRANDE_DBA" object_instance="5" cost="8,354,912" cardinality="23,219,146" bytes="1,044,861,570" cpu_cost="123,043,653,827" io_cost="8,351,820" qblock_name="SEL$5" filter_predicates="&quot;FILE_DATE&quot;=&quot;LAST_FILE_DATE&quot;" time="116,969"
                            id="12" operation="WINDOW" option="SORT" cost="8,354,912" cardinality="23,219,146" bytes="766,231,818" cpu_cost="123,043,653,827" io_cost="8,351,820" temp_space="1,211,565,000" qblock_name="SEL$5" time="116,969"
                                object_ID="3" id="13" operation="TABLE ACCESS" option="BY INDEX ROWID" optimizer="ANALYZED" object_owner="CORP_FIN" object_name="DCPDS" object_type="TABLE" object_instance="6" cost="8,225,535" cardinality="23,219,146" bytes="766,231,818" cpu_cost="94,120,935,947" io_cost="8,223,170" qblock_name="SEL$5" filter_predicates="&quot;PAY_PLAN&quot;='GS' OR &quot;PAY_PLAN&quot;='YA'" time="115,158"
                                    object_ID="4" id="14" operation="INDEX" option="RANGE SCAN" optimizer="ANALYZED" object_owner="DDELGRANDE_DBA" object_name="DCPDS_LOCATION" object_type="INDEX" search_columns="1" cost="19,848" cardinality="44,080,322" cpu_cost="2,837,503,343" io_cost="19,777" qblock_name="SEL$5" access_predicates="&quot;LOCATION&quot; LIKE 'W%'" filter_predicates="&quot;LOCATION&quot; LIKE 'W%'" time="278"/
    Here is the "slower" one:
    id="0" operation="SELECT STATEMENT" optimizer="ALL_ROWS" cost="28,025,223" cardinality="104,755" bytes="5,552,015" cpu_cost="806,125,131,535" io_cost="27,983,186" time="392,354"
        id="1" operation="HASH" option="GROUP BY" cost="28,025,223" cardinality="104,755" bytes="5,552,015" cpu_cost="806,125,131,535" io_cost="27,983,186" qblock_name="SEL$1" time="392,354"
            id="2" operation="VIEW" object_owner="DDELGRANDE_DBA" object_instance="1" cost="28,025,218" cardinality="104,755" bytes="5,552,015" cpu_cost="806,027,246,428" io_cost="27,983,186" qblock_name="SEL$1D90FC22" time="392,354"
                id="3" operation="HASH" option="GROUP BY" cost="28,025,218" cardinality="104,755" bytes="8,275,645" cpu_cost="806,027,246,428" io_cost="27,983,186" qblock_name="SEL$1D90FC22" time="392,354"
                    id="4" operation="HASH JOIN" cost="28,025,213" cardinality="104,755" bytes="8,275,645" cpu_cost="805,929,361,321" io_cost="27,983,186" temp_space="481,887,000" access_predicates="&quot;EMPLOYEE_ID&quot;=&quot;DCPS&quot;.&quot;EMPLOYEE_ID&quot; AND &quot;DCPS&quot;.&quot;YR&quot;=EXTRACT(YEAR FROM INTERNAL_FUNCTION(&quot;FILE_DATE&quot;))" time="392,353"
                        id="5" operation="VIEW" object_owner="DDELGRANDE_DBA" object_instance="2" cost="2,823,626" cardinality="10,475,527" bytes="356,167,918" cpu_cost="487,845,223,357" io_cost="2,798,186" qblock_name="SEL$3" time="39,531"
                            id="6" operation="HASH" option="GROUP BY" cost="2,823,626" cardinality="10,475,527" bytes="398,070,026" cpu_cost="487,845,223,357" io_cost="2,798,186" qblock_name="SEL$3" time="39,531"
                                object_ID="0" id="7" operation="MAT_VIEW ACCESS" option="BY INDEX ROWID" object_owner="ARMYMP" object_name="DCPS" object_type="MAT_VIEW" object_instance="3" cost="2,823,051" cardinality="10,475,527" bytes="398,070,026" cpu_cost="476,819,453,647" io_cost="2,798,186" qblock_name="SEL$3" filter_predicates="TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION(&quot;DTE_PPE_END&quot;),'MMDD'))&gt;=1014 AND TO_NUMBER(TO_CHAR(INTERNAL_FUNCTION(&quot;DTE_PPE_END&quot;),'MMDD'))&lt;=1027 AND (SUBSTR(&quot;GRC&quot;,1,1)='B' OR SUBSTR(&quot;GRC&quot;,1,1)='C' OR SUBSTR(&quot;GRC&quot;,1,1)='D' OR SUBSTR(&quot;GRC&quot;,1,1)='E' OR SUBSTR(&quot;GRC&quot;,1,1)='F' OR SUBSTR(&quot;GRC&quot;,1,1)='H' OR SUBSTR(&quot;GRC&quot;,1,1)='L' OR SUBSTR(&quot;GRC&quot;,1,1)='O' OR SUBSTR(&quot;GRC&quot;,1,1)='R' OR SUBSTR(&quot;GRC&quot;,1,1)='S' OR SUBSTR(&quot;GRC&quot;,1,1)='T')" time="39,523"
                                    object_ID="1" id="8" operation="INDEX" option="RANGE SCAN" optimizer="ANALYZED" object_owner="ARMYMP" object_name="DCPS_LOCATION" object_type="INDEX" search_columns="1" cost="281,465" cardinality="215,251,937" cpu_cost="46,870,067,520" io_cost="279,021" qblock_name="SEL$3" access_predicates="&quot;LOCATION&quot; LIKE 'W%'" filter_predicates="&quot;LOCATION&quot; LIKE 'W%'" time="3,941"/
                        id="9" operation="VIEW" object_owner="DDELGRANDE_DBA" object_instance="5" cost="25,134,240" cardinality="20,437,108" bytes="919,669,860" cpu_cost="311,591,056,432" io_cost="25,117,991" qblock_name="SEL$5" filter_predicates="&quot;FILE_DATE&quot;=&quot;LAST_FILE_DATE&quot;" time="351,880"
                            id="10" operation="WINDOW" option="SORT" cost="25,134,240" cardinality="20,437,108" bytes="633,550,348" cpu_cost="311,591,056,432" io_cost="25,117,991" temp_space="984,859,000" qblock_name="SEL$5" time="351,880"
                                object_ID="2" id="11" operation="MAT_VIEW ACCESS" option="BY INDEX ROWID" object_owner="ARMYMP" object_name="DCPDS" object_type="MAT_VIEW" object_instance="6" cost="25,024,511" cardinality="20,437,108" bytes="633,550,348" cpu_cost="286,442,201,519" io_cost="25,009,574" qblock_name="SEL$5" filter_predicates="&quot;PAY_PLAN&quot;='GS' OR &quot;PAY_PLAN&quot;='YA'" time="350,344"
                                    object_ID="3" id="12" operation="INDEX" option="RANGE SCAN" optimizer="ANALYZED" object_owner="ARMYMP" object_name="DCPDS_LOCATION" object_type="INDEX" search_columns="1" cost="52,686" cardinality="34,054,388" cpu_cost="8,896,424,679" io_cost="52,222" qblock_name="SEL$5" access_predicates="&quot;LOCATION&quot; LIKE 'W%'" filter_predicates="&quot;LOCATION&quot; LIKE 'W%'" time="738"/
    Notice the faster one has two extra lines in it; it is creating a SYS-based view with an additional "Group By" hash.
    Also, the faster one's Table Access By Rowid lines are marked as "Analyzed", while the slower one's Materialized View Access By Index Rowid lines are not.
    Any idea why this would happen?
    (And yes, I do notice that the cpu_cost values for the slower one tend to be 2-4x as high as for the faster one.)

    Also, the faster one's Table Access By Rowid lines are marked as "Analyzed", while the slower one's Materialized View Access By Index Rowid lines are not.
    Have you gathered stats on the MV tables?

  • How many AirPort Expresses can I add to my network to send an audio signal to different rooms in the house? I'm interested in running about six different zones.

    How many AirPort Expresses can I add to my network for the purpose of sending an audio signal to different rooms in the house? I'm interested in running about six different zones.
    What I'm looking to do is have self-powered in-ceiling speakers in every room of my house without having to run wires to every room to carry the audio signal. I would like to use AirPort Expresses to do the job of carrying the audio signal.
    Here's my setup now: I have an AirPort Extreme and one AirPort Express that I use to carry audio to one room.

    FWIW, I have used up to four AirPort Express Base Stations (AX) for streaming successfully, in pretty much the way you have described. I didn't have a need to try more, so I can't attest whether more would or would not work.

  • Data communication between jsp pages in different context

    hi,
    I have two web applications, one calling the other. The calling app needs to pass a user id, but not in the URL.
    How can this be achieved?
    Even though I searched several forums about forward and redirect, I do not have a clear idea.
    Please help.
    Thanks.

    Use POST. The form action URL can just point to a different context.

  • Load balancing within the same ACE across two different contexts residing on the same vlan

    I'm working on a design that requires traffic to be sent to a different context on the same ACE. The question I have is: can this be done when both reside on the same VLAN? Would the traffic in this case be handled at layer 2 instead of layer 7? Would I have to create a separate subnet in order to provide load balancing?
    [Diagram: Context A and Context B both attached to VLAN 5 on the same ACE]
    Thanks, Jerilyn

    By design, two contexts on the same box in the same VLAN can't communicate. You have to use an external L3 device.
    A workaround may be to use two different VLANs and then bridge between them with a loopback cable.

  • Why do we need to have an observer run on a different computer

    Hello team,
    Why do we need to have an observer run on a different computer, and how do we install and configure it on that computer? Please help.
    5.5.6 Managing the Observer
    The observer is integrated in the DGMGRL client-side component and runs on a different computer from the primary or standby databases and from the computer where you manage the broker configuration. The observer continuously monitors the fast-start failover environment to ensure the primary database is available (described in Section 5.5.2.1). The observer's main purpose is to enhance high availability and lights out computing by reducing the human intervention required by the manual failover process that can add minutes or hours to downtime.
    Thanks

    854393 wrote:
    Thanks Aman,
    Do we need the same DB binaries and OS version for starting the observer on the different computer? Here the primary and standby are Linux.
    Thanks in advance...
    Regards,
    It does not have to be on the same platform or operating system on which the databases reside, and it does not need Oracle EE and an instance (which means no extra license).
    To configure it, check this:
    http://gjilevski.wordpress.com/2010/03/06/configuring-10g-data-guard-broker-and-observer-for-failover-and-switchover/

  • How to Run scenario from the web using HTTP web page?

    Hi guys
    Please let me know how to run a scenario from the web using an HTTP web page.
    Regards
    Janakiram

    Hi Janakiram,
    ODI provides a web-based UI for running scenarios via Metadata Navigator (a read-only view of your ODI components) and Lightweight Designer (where you can edit mappings).
    Please explore how to install Metadata Navigator in ODI, and have a look at the ODI setup documentation for more information.
    Thanks,
    Guru

  • ORA-12560: TNS:protocol adapter error. Running sqlplus in different paths.

    Hi,
    Title: ORA-12560: TNS:protocol adapter error. Running sqlplus in different paths.
    Very strange. We have a Windows host, 32-bit Oracle 10g software, and a 64-bit SAP kernel (x64).
    We applied a kernel patch yesterday; it was successful, and the system is up and running.
    The only problem is with sqlplus. I have logged in as the SIDADM user, and when I open a command box the default path is C:\Documents and Settings\SIDADM. When I run sqlplus sys as sysdba there, I am able to connect without any problem, but when I am in this path - /usr/sap/SID/SYS/exe/run - and run sqlplus sys as sysdba, it returns the ORA-12560 error. That is why the BRTOOLS are also not working, giving the same error.
    Is there a problem in the kernel? Where could be the problem?
    The environment variables ORACLE_SID, ORACLE_HOME are set properly.
    Many thanks,
    Mohan.

    Hi,
    Please look at the SAP recommendation below, from SAP Note 192822:
    3. Can I run 32-bit applications on my 64-bit platform?
    This depends on the platform you are running on. For all Unix platforms, this can be done without any problem. Even if the OS is started in 64-bit mode, it is possible to run 32-bit applications (R/3, Oracle) on it; there is no need to upgrade either Oracle or R/3 to a 64-bit version.
    This, however, is NOT the case for Windows & Linux. If you have IA64 hardware in place, you do need to run a 64-bit OS as well as all applications in 64-bit mode.
    Could you please now share with me your OS (version and whether it is 32 bit or 64 bit as well), database (32 bit or 64 bit), SAP system (32 bit or 64 bit and System release) ?
    Thanks
    Sunny

  • Unix Perl script to verify an up-and-running database on a different server

    Unix Perl script to verify an up-and-running database on a different server
    Hi,
    Can anyone please tell me a solution to verify an up-and-running database on a server other than the one where we run the Perl script? The script should check whether the database is running, and exit if it is not.
    Thanks much
    Kiran

    Another good solution would be Enterprise Manager: load EM on the other machine and install the Oracle Intelligent Agent on all the boxes where Oracle is running, and the problem is solved.
    FTP is only a file transfer protocol; you can upload/download files but can't execute them.
    Apart from EM, the best way is to load the Oracle client and make connections to all the databases.
    There is also some free Oracle monitoring software available; I don't know much about it, but one is Nagios (if I am not wrong). Try that if you want.
    BTW, what's the problem with monitoring the boxes from the same physical box? Just schedule a script with cron on the physical box where Oracle runs, to either make a connection using SQL*Plus or check the processes using the "ps" command, and send an alert from that box if anything is wrong. That way there is no need to maintain a central monitoring server.
    Daljit Singh
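    If you do want a remote check without installing an Oracle client, one lightweight (and admittedly weaker) alternative to the approaches above is to probe whether the listener's TCP port accepts connections. A sketch in Python rather than Perl; the host name below is a placeholder, and 1521 is only the default listener port:

    ```python
    import socket

    def db_listener_up(host: str, port: int = 1521, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to the listener port succeeds.

        This only proves the listener is reachable, not that the instance is
        open; for a stronger check, follow it with a real SQL*Plus connection.
        """
        try:
            # create_connection resolves the host and attempts the TCP handshake
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Placeholder host: replace with your database server.
    print("UP" if db_listener_up("dbhost.example.com", 1521, timeout=1.0) else "DOWN")
    ```

    Scheduled from cron, the script's output (or exit status, if you add one) can drive the alert, with no central monitoring server needed.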

Maybe you are looking for