Dynamic physical architecture

Hello All,
I wonder if it is possible to have a physical architecture be dynamic....
I have a comprehensive ODI framework that calls upon certain flat files and (mostly) reads/writes from/to these using a variety of interfaces. The names (and locations (directories)) of the flat files are stored as variables (extracted from a table). The information about file locations is, therefore, found in two places: 1) in the variables and 2) in the physical architecture.
I would like to have the location information stored in the variables only and assigned dynamically to the physical architecture, so that I can have applications and users change the location.
Is this possible? In other words, can the physical architecture pointer for a flat file be represented as a variable (#<PROJNAME>.<Directory>)? If so, when is the pointer resolved and what, if any, are the constraints?
Alternatively, can the physical architecture pointers be left blank, and can the entire file resolution be done in a variable (in the interface)? In other words, can the issue of physical architecture be postponed to a point where it is resolved by the interface execution.
Here is an example....
I want to access a file, X.JNK, stored in the directory C:\MYDIR on the host A.
Today the physical architecture points to C:\MYDIR, and the interface code identifies X.JNK (as a variable <PROJNAME>.<FILENAME>, e.g. #MYPROJ.JUNKFILE = X.JNK).
The directory is also stored in a variable (as <PROJNAME>.<DIRECTORY>, e.g. #MYPROJ.JUNKDIRECTORY = C:\MYDIR), and so the fully qualified "address" for the file is:
#<PROJNAME>.<DIRECTORY>\#<PROJNAME>.<FILENAME> =
#MYPROJ.JUNKDIRECTORY\#MYPROJ.JUNKFILE =
C:\MYDIR\X.JNK
Given that setup, can the physical architecture be left blank (except for host information) and the interface be executed with #MYPROJ.JUNKDIRECTORY\#MYPROJ.JUNKFILE?
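For later readers, one common way to do this (a sketch, assuming the variables are refreshed by the package before the interface step runs) is to put the variable references directly into the topology and the model, since ODI resolves them at session execution time:

```
Physical schema (File technology):
    Directory (Schema):      #MYPROJ.JUNKDIRECTORY
    Directory (Work Schema): #MYPROJ.JUNKDIRECTORY

Model / datastore:
    Resource Name (file):    #MYPROJ.JUNKFILE
```

The field labels above are illustrative; the key point is that the variables must hold values (via a refresh step) by the time the session reaches the interface.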
Best,
pajacobsen

Dev,
Yes, it works, although, for reasons unknown, I had to completely redo all interfaces that drew upon the affected models. (I attempted a variety of other changes, modifying, for instance, the alias, but ultimately nothing worked until I rebuilt all the interfaces.)
So, in sum, this issue is closed with the proviso that it is necessary to redo the interfaces.
Best,
pajacobsen
P.S. I often end up having to redo interfaces (rather than edit them) once I introduce changes to the models. Although the interfaces, on the surface, look fine after the modification of the model, they don't work (I suspect a pointer problem). I guess there are limits to ODI's flexible nature.
Edited by: user11102735 on Nov 23, 2010 1:59 PM

Similar Messages

  • Query on Physical Architecture,Logical Architecture and Model

    Hi Experts,
I am confused about Physical Architecture, Logical Architecture and Model. Please tell me what type of information or data these three hold.
    Thanks

The physical architecture contains information on the physical setup of your environment, i.e. server names, JDBC connection strings, usernames, passwords etc.
The logical architecture provides a layer of abstraction that allows you to group, via contexts, similar physical architecture components which reside in different locations/environments.
Models are the reversed representations of the objects in your physical architecture, i.e. tables, flat files etc. Models are used as sources and targets in your interface designs.
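As a sketch (server and schema names invented for illustration), a context is what ties the two layers together:

```
Logical schema    Context    Physical schema
--------------    -------    ----------------------
ORACLE_SALES      DEV        DEV_SERVER.SALES_DEV
ORACLE_SALES      PROD       PROD_SERVER.SALES
```

An interface designed against the logical schema ORACLE_SALES then runs against either physical schema, depending on the context chosen at execution.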

  • Sample Oracle EBS R12 physical/architecture layout diagram

    Hello!
    Anyone can share sample Oracle EBS R12 Physical/architecture layout diagram?
Primarily I am looking for a sample diagram that shows the mid-tier (concurrent managers, reporting nodes, load partitioning) and the database nodes.

    user10229350 wrote:
    First you stop responding with unnecessary links which are of no help.
    Hussein's comment in this thread was spot-on correct.
    Multi-posting is poor forum etiquette at best and outright rudeness that exhibits arrogance at worst.
    Your other thread has an appropriate response that directs you to the product's documentation.
    That documentation exists to guide you how to use the product as designed.
    This thread is locked.

  • Hyperion Planning - Physical Architecture

    Hi Experts,
    I'm checking through the connections in the physical architecture of ODI for ERPi. When I test the connection to Planning I get the following error:
    Connection failed
    java.lang.Exception
    I have checked the ODI agent is working and the server reference and port are correct.
    I'm hoping someone will be able to show me the light!? Also, would this impact drillthrough?
    Thanks in advance.
    Mark

    Mark,
ODI has no bearing on DrillBack. All ODI is used for is to retrieve information out of Oracle EBS and provide it to ERPi. ERPi at that point will load the data into the desired target, based on the target application settings.
As for the error, it seems that either something is not valid or is not able to be 'tested'. Please follow the documentation noted in the knowledge document: Configure and use ERPi to load data into Financial Data Quality Management from EBS [ID 951369.1]
    Thank you,

  • Best (Thrifty) physical architecture for medium-size environment?

    What is the absolute best physical architecture you would come up with for this medium size environment?

Thrifty? The SharePoint CALs alone will run you at least $1.5 million unless you already own them or have a good EA. You said "basic collaboration" so I'm assuming you don't also need Enterprise CALs, though note that some of the features you listed could require enterprise licensing depending on how you're using them.
You haven't provided enough information to make an appropriate recommendation. While you have 10,000 users, it's not clear how many concurrent users you expect, or what sort of workloads these users will have.
Are the users all in one location? Spread out geographically? What types of network connections? What types of networking devices (such as a load balancer for your second option)?
    Is this virtual or physical? What are the hardware specs of the servers?
    How many documents do you expect to have? How large will they be? What sort of storage is backing this farm? How many site collections, sites, lists, libraries, items do you expect? Workflows, third party solutions, customizations, etc. The list goes on.
Neither of these options provides high availability. Is this a concern for you? I ask because in my experience a SharePoint farm used by over 10,000 people needs to be highly available. If you need HA then you'll need more servers.
    Honestly this forum isn't the best place to get the information you're looking for (i.e. an architectural design). Instead I recommend asking specific questions about specific problems you have with your design.
    My question(s) then for you are: 1. Why are these the two designs you have? (in another way: What are the decisions and requirements you have that led you to these options) and 2. What is the challenge you're having in picking one of them?
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

  • Physical architecture of SSAS

    Hi experts,
Can anyone please help me understand the physical architecture of SSAS? It would be very helpful for improving my skills.
    Thanks

    Hi,
    You can start here:
    Analysis Services Multidimensional Data - Physical Architecture:
    http://msdn.microsoft.com/en-us/library/bb522614.aspx
    SSAS Data Mining Physical Architecture: http://msdn.microsoft.com/en-us/library/bb510502.aspx
    Regards,
    Zoli

  • Microsoft Excel Physical Architecture not possible to create.

    Hello:
I'm using ODI 11.1.1.5 and I am having problems loading a simple Excel file.
Following the instructions in various blogs, I find that I don't have the Microsoft Excel "Technology" in ODI under the "Physical Architecture" section's "Technologies" folder.
I only have a "File" folder inside the "Technology" folder.
    When I try to create a new technology, I get a message saying: "Unable to save Microsoft Excel (ODI-17591: Name 'MICROSOFT_EXCEL' is already used)"
    Can anyone help me with this?

    Are you trying to create a dataserver under Excel or actually create a new technology?
I think you want to right-click Excel and add a dataserver. Try a different name; it might be getting confused with the code derived from the name you are currently providing.
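For reference, a typical Excel data server in ODI 11g uses the JDBC-ODBC bridge (a sketch; ODI_EXCEL is an assumed ODBC DSN that you would first create for the workbook on the agent machine):

```
Technology:  Microsoft Excel
JDBC driver: sun.jdbc.odbc.JdbcOdbcDriver
JDBC URL:    jdbc:odbc:ODI_EXCEL
User/Pass:   (left empty)
```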

  • Heatsink, logic board and physical architecture

    Want to ask anyone who knows the macbook physical architecture:
How is the heatsink related to the logic board? Is it attached to the logic board or are they separated? Does a replacement of the logic board also replace the heatsink?
    Thank you!

    So according to what you say, my RSD problem is still not solved.
    That's jumping to conclusions.
    There are many possible reasons why a computer might suddenly shut down.
    With the first version of the MacBook many people reported the following:
    a) The computer would shut down many times, seemingly at random.
    b) After waiting a while they were able to turn the computer back on again.
    c) Either replacing the heat sink assembly or installing the SMC firmware update fixed the problem.
    But I see from one of your other posts that in your case:
    a) You had only one shut down.
    b) Your MacBook would not start up at all afterwards.
    c) The shut down occurred after you'd installed the SMC firmware update.
    This suggests that in the case of your MacBook the shutdown had a different cause and would therefore require a different remedy.
    I recommend you just use your MacBook normally. After a while you will be able to judge whether the shop has correctly diagnosed and fixed the problem.

  • Dynamic physical table name vs. Cache

    Hello, Experts!
I'm facing quite an interesting problem. I have two physical tables with the same structure but with different data. The requirement is to show the same reports against one table or the other. The idea is to change the physical table name dynamically using a session variable. The session variable can be changed in the UI, so this was working until the cache was turned on. When the cache is on, the logical statements sent to the OBI back end are identical even for different values of the session variable that stores the physical table name. Once the cache is populated, every user will get values from the cache. This is a possible source of discrepancy, because some users might run reports with tableA values and some with tableB values.
Are there any options to make OBI use the data from the proper physical table (i.e. according to the session variable value)? Cloning the model is not an option because it would be way too hard and complex to maintain both; besides, the same reports need to work sometimes with one table and sometimes with the other...
    PS. Cache is set to be common for all users.
    Lucas

Thank you, I've found another way to make it work. In fact there are two ways of doing it: filter the LTS so that all data comes filtered from a single table using a session variable, or use fragmentation content, also with a session variable.
Now the tricky part is to set the variable from the UI. Currently I'm using Issue Raw SQL: call NQSSetSessionValue( 'String SV_SIGNOFF=aaa;' ), but I have to figure out how to change the value of a non-system session variable without needing administrator rights.
There is also the Go URL method, but it's not working... It involves adding, in ORACLE_HOME/bifoundation/web/display/authenticationschemas.xml,
<RequestVariable source="url" type="informational" nameInSource="lang"
biVariableName="NQ_SESSION.LOCALE" />
inside the top <AuthenticationSchemaGroup> </AuthenticationSchemaGroup> tag.
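For reference, the two repository approaches mentioned in this thread might look roughly like this (an illustrative sketch; the table, column and variable names are modeled on the post, not taken from a real repository):

```
-- LTS filter: a single Logical Table Source with a Content filter such as
"MYDB"."MYSCHEMA"."FACT"."SOURCE_FLAG" = VALUEOF(NQ_SESSION."SV_SIGNOFF")

-- Fragmentation: two LTSs, one over tableA and one over tableB,
-- each with fragmentation content such as
VALUEOF(NQ_SESSION."SV_SIGNOFF") = 'aaa'   -- LTS mapped to tableA
VALUEOF(NQ_SESSION."SV_SIGNOFF") = 'bbb'   -- LTS mapped to tableB
```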

  • Dynamic physical table source schema

    Hi All,
Is it possible to dynamically change the physical schema name?
I am in the process of creating a report, but the report should show data based on the logged-in user.
    NOTE: There is a one to one mapping between BI user and oracle database schema.
    Any advice is greatly appreciated.

Hi Venky,
I think proxy authentication is what I want.
In my case the BI Server has a fixed connection as the 'BI_Account' username.
I have created Oracle database accounts (say account1 and account2; I already have BI Server accounts with the same names) which have 'connect through' privileges.
I tested in SQL*Plus that when a user connects as BI_Account[account1]/BI_Account@my_db, the session runs in the 'account1' schema, which is exactly what I want.
How do I replicate this in OBIEE? What do I have to do?
How do I tell the BI Server that 'BI_Account' is a proxy account?
How do I tell the BI Server that user 'account1' should get its data from the account1 schema?
What else do I have to do?

  • Dynamic Physical File names on AL11

    Hi,
    I have the following requirement.
    I have almost 25 files to be exported onto AL11 using OpenHub from various infoproviders and I used the "Logical File name" option to do it.
    I would like to have the physical file names as " ZINFOPROV_<DATE>_<TIME> " and Infoprovider tech. name should get populated dynamically somehow.
    I was looking at the following thread for the same req : Re: Open Hub File name Change
    I defined my logical file name definition as
    Logical file    Z_SALES
    Name            Used for SALES
    Physical file   ZINFOPROV_<DATE>_<TIME>
    Data format     ASC
    Applicat.area   BW
    Logical path    Z_SALES
    I would like this ZINFOPROV populated dynamically.
Does anyone have any inputs/suggestions?
    Regards,
    Kumar

    Hi Kumar,
You have a very good question, and you have already shown the way forward in the question itself. Considering <DATE> as one kind of literal, you need a literal that identifies the InfoProvider as part of your file name. My answer is not a direct resolution, but it may lead you to yours. Here is the list of other literals that might be of use to you.
    Extract from the help.......
    Physical path name
Platform-specific physical path under which files are stored. It must contain the reserved word <FILENAME> as a placeholder for the file name. It can also contain further reserved words (see below).
    Use
    The physical path name is used by the function module FILE_GET_NAME at run time to create a complete platform-specific file name.
    Procedure
Specify a path. You can include the following reserved words in angle brackets. These are replaced with current values at runtime.
    Reserved Word    Replacement Text
    <OPSYS> Operating system in call
    <INSTANCE> R/3 application instance
    <SYSID> R/3 application name in SY-SYSID
    <DBSYS> Database system in SY-DBSYS
    <SAPRL> R/3 release in SY-SAPRL
    <HOST> Host name in SY-HOST
    <CLIENT> Client in SY-MANDT
    <LANGUAGE> Log on language in SY-LANGU
    <DATE> Date in SY-DATUM
    <YEAR> Year in SY-DATUM, 4-character
    <SYEAR> Year in SY-DATUM, 2-character
    <MONTH> Month in SY-DATUM
    <DAY> Day in SY-DATUM
    <WEEKDAY> Day of the week in SY-FDAYW
    <TIME> Time in SY-UZEIT
    <STIME> Hour and minute in SY-UZEIT
    <HOUR> Hour in SY-UZEIT
    <MINUTE> Minute in SY-UZEIT
    <SECOND> Seconds in SY-UZEIT
    <PARAM_1> External parameter 1
    <PARAM_2> External parameter 2
    <PARAM_3> External parameter 3
    <P=name> Name of a profile parameter (see Report RSPARAM for valid values)
    <V=name> Name of a variable (stored in variable table)
    <F=name> Return value of a function module Naming convention for this function module: FILENAME_EXIT_name
    <Y=name> Return value of a function module Naming convention for this function module: Y_FILENAME_EXIT_name
    <Z=name> Return value of a function module Naming convention for this function module: Z_FILENAME_EXIT_name
    Examples
    Local directory for temporary files under Windows NT: D:\GLOBAL\ARCHIVE\<PARAM_1>\<FILENAME>
Local directory for temporary files under DOS: C:\TMP\<FILENAME>
    Local directory for temporary files under UNIX: /usr/<SYSID>/local/tmp/<CLIENT>/<FILENAME>
    Dependencies
    The function module for the run-time variable <F=name> must meet the following requirements:
    Name
    The name must begin with "FILENAME_EXIT_".
    Parameter
An export parameter with the name "OUTPUT" must exist. No structure may exist for this parameter.
    Import parameters are only supported if they have default values.
    Table parameters are not supported.
    Valid example: FILENAME_EXIT_EXAMPLE.
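Raj's <PARAM_1> suggestion could then be wired up at runtime roughly like this (a sketch, assuming the physical file in the logical file name definition is changed to <PARAM_1>_<DATE>_<TIME>, and that your own logic fills the InfoProvider name; check the exact parameter typing against your system):

```abap
DATA: lv_infoprov(30) TYPE c VALUE 'ZINFOPROV',
      lv_file(255)    TYPE c.

* FILE_GET_NAME builds the platform-specific physical name from the
* logical file name, substituting <PARAM_1>, <DATE>, <TIME> etc.
CALL FUNCTION 'FILE_GET_NAME'
  EXPORTING
    logical_filename = 'Z_SALES'
    parameter_1      = lv_infoprov
  IMPORTING
    file_name        = lv_file
  EXCEPTIONS
    file_not_found   = 1
    OTHERS           = 2.
```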
    I hope this leads you to your resolution.
    Regards
    Raj
    Edited by: Rajesh J Salecha on Oct 9, 2009 1:48 PM - Not able to make it properly formatted...

  • Dynamic app architecture

    I have application that displays customer data in the table form. I don’t know the data structure in advance, so I came up with the following architecture:
I have a report.jsp page that includes either the generic reporttable.jsp or a customer-specific reporttable.jsp.
    report.jsp
            <f:subview id="invTable">
                 <jsp:include page="#{reportBean.tableJsp}"/>
            </f:subview>
...The tableJsp string points to /reporttable.jsp or /customer1/reporttable.jsp depending on the customer's needs.
    reporttable.jsp
            <h:dataTable  id="tableData" value="#{reportBean.reportData}" var="rpt" >
           <c:forEach items="#{reportBean.columnNames}" var="name">
                <h:column>
               <f:facet name="header">
                    <h:outputText value="#{name}" />
                  </f:facet>
               <h:outputText value="#{rpt[name]}"/>
             </h:column>
           </c:forEach>
            </h:dataTable>
...For a customer that needs something different, I create customer1Bean, which returns a customized reportData.
/customer1/reporttable.jsp
It may be different from the generic JSP, but it can also be almost the same except for the bean it calls:
            <h:dataTable  id="tableData" value="#{customer1Bean.reportData}" var="rpt" >
...I am wondering if there is a way to call the correct bean dynamically, so I don't have to duplicate pages when they are the same.
    I would appreciate any suggestions.
    Irina.

Thank you, <af:switcher> will work, but a lot of code needs to be duplicated.
If I have 10 customers that need something special in reportData, I will need to duplicate the table and whatever else is needed 10 times, since each needs a different reportBean.
    <af:switcher facetName="#{userData.custName}" defaultFacet="default">
    <f:facet name="default">
            <h:dataTable  id="tableData" value="#{reportBean.reportData}" var="rpt" >
           <c:forEach items="#{reportBean.columnNames}" var="name">
                <h:column>
               <f:facet name="header">
                    <h:outputText value="#{name}" />
                  </f:facet>
               <h:outputText value="#{rpt[name]}"/>
             </h:column>
           </c:forEach>
            </h:dataTable>
    </f:facet >
    <f:facet name="customer1">
            <h:dataTable  id="tableData" value="#{customer1.reportData}" var="rpt" >
           <c:forEach items="#{customer1.columnNames}" var="name">
                <h:column>
               <f:facet name="header">
                    <h:outputText value="#{name}" />
                  </f:facet>
               <h:outputText value="#{rpt[name]}"/>
             </h:column>
           </c:forEach>
            </h:dataTable>
    </f:facet >
    <f:facet name="customer2">
            <h:dataTable  id="tableData" value="#{customer2.reportData}" var="rpt" >
           <c:forEach items="#{customer2.columnNames}" var="name">
                <h:column>
               <f:facet name="header">
                    <h:outputText value="#{name}" />
                  </f:facet>
               <h:outputText value="#{rpt[name]}"/>
             </h:column>
           </c:forEach>
            </h:dataTable>
    </f:facet >
</af:switcher>
The solution that I have so far (it may not be the best) is:
In class Customer I initialize reportBean based on the customer name:
            if(custName.equals("customer1"))
                reportBean = new Customer1Bean(reportMenuSel);
            else if(custName.equals("customer2"))
                reportBean = new Customer2Bean(reportMenuSel);
            else
                reportBean = new ReportBean(reportMenuSel);
...Customer1Bean extends ReportBean.
In the JSP page I call cust.reportBean.reportData, which gives me access to the correct reportData without duplicating JSP code:
            <h:dataTable  id="tableData" value="#{cust.reportBean.reportData}" var="rpt" >
       <c:forEach items="#{cust.reportBean.columnNames}" var="name">
                <h:column>
               <f:facet name="header">
                    <h:outputText value="#{name}" />
                  </f:facet>
               <h:outputText value="#{rpt[name]}"/>
             </h:column>
           </c:forEach>
            </h:dataTable>
    ...Irina.
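Irina's if/else selection above can also be written as a lookup table, so adding a customer means registering one entry instead of editing a chain of conditions. A minimal, self-contained sketch (ReportBean and Customer1Bean here are simplified stand-ins for the real beans in the post):

```java
import java.util.Map;
import java.util.function.Function;

// Stand-ins for the beans described in the post.
class ReportBean {
    final String menuSel;
    ReportBean(String menuSel) { this.menuSel = menuSel; }
    String source() { return "generic"; }
}

class Customer1Bean extends ReportBean {
    Customer1Bean(String menuSel) { super(menuSel); }
    @Override String source() { return "customer1"; }
}

public class ReportBeanFactory {
    // Map each customer name to a bean constructor; unknown names
    // fall back to the generic ReportBean.
    private static final Map<String, Function<String, ReportBean>> REGISTRY =
            Map.of("customer1", Customer1Bean::new);

    public static ReportBean create(String custName, String menuSel) {
        return REGISTRY.getOrDefault(custName, ReportBean::new).apply(menuSel);
    }

    public static void main(String[] args) {
        System.out.println(create("customer1", "sales").source()); // customer1
        System.out.println(create("someoneElse", "sales").source()); // generic
    }
}
```

The JSP side would stay exactly as in the single generic page above, since it only ever sees a ReportBean.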

  • Create a dynamic physical file path in FILE tcode

    Hi ,
I have a requirement where I need to create a file on the application server. The physical path depends on the month in which it is executed.
For example, I have a file with the name 29082011_hh:mm:ss.dat.
This file should be stored in the directory file015\FI\appl\Aug\.
So I need to create the logical file path as file015\FI\appl\<month>\<filename>.
My question is: do we need to maintain the folders for all the months in AL11, or can they be generated at runtime?
Please reply.

    You must create the directories beforehand.

  • What will be physical architecture for SSAS Tabular model with Sharepoint 2013?

    I would like to build SharePoint 2013 based reporting portal with Power View as visualization tool.
    I have 300-800 million rows and customer puts priorities in high performance.
    Data source is SQL Server database and Flat Files.
What will be the recommended physical architecture? Is the following good?
    -SQL Server database server
    -SSAS Tabular application server
    -SharePoint 2013 Front-End (multiple if needed)
    Kenny_I

    Hi Kenny_I,
According to your description, you want to know whether it's OK to put the SQL Server database server, the SSAS Tabular application server and the SharePoint 2013 front-end on different servers, right?
As I said in another thread, even though we can install all of them on the same physical server, in a production environment the recommendation is to install them on different servers; this is beneficial for troubleshooting when an issue occurs in the environment. So in your scenario it is OK to install the applications on different servers. Here are some links about the requirements for installing the applications.
    System requirements for Power View
    Hardware
    Sizing a Tabular Solution (SQL Server Analysis Services)
    If I have anything misunderstood, please point it out.
    Regards,
    Charlie Liao
    TechNet Community Support

  • How to setup a physical architecture for MS ACCESS in Solaris

There isn't any ODBC on the Solaris platform. How can I access MS Access files?
    Thanks in advance!

    Hi, here are two solutions from metalink.
a) Set up a Sunopsis Agent on the Microsoft Windows system hosting the Access database; it will use the ODBC/JDBC bridge to connect to the Access database. The data may then, for example, be loaded by an Integration Interface into a database on a Unix system for further processing.
b) Set up a Sunopsis Package made up of the following steps (to be executed on an Agent set up on the appropriate Microsoft Windows host):
    - 1. Run the SnpsSQLUnload Tool to extract the data to a Flat File on the Microsoft Windows host
    - 2. Use the SnpsFTP tool to transfer the file to a Unix system
- 3. Run an Integration Interface using the file on the Unix system as the source.
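Step 1 might look roughly like this as a package step (an illustrative sketch only; the DSN, file path and query are invented, and the exact parameter names should be checked against the Sunopsis/ODI tools reference):

```
SnpsSqlUnload "-DRIVER=sun.jdbc.odbc.JdbcOdbcDriver" "-URL=jdbc:odbc:ACCESS_DSN"
              "-USER=" "-PASS=" "-FILE_FORMAT=VARIABLE" "-FIELD_SEP=;"
              "-FILE=C:\temp\export.txt" "-QUERY=select * from MY_TABLE"
```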
