Best practice for sharing data with modal window

Hi team,
What would be the best practice for sharing data with a modal
window? I use a modal window to display record details from a
record list, but I am not quite sure how to access the data from
the components in the main application in the modal window.
Any hints would be welcome.
Best
Frank

Pass a reference to the parent into the modal popup. Then you
can reference anything in the parent scope.
I haven't done this in 2.0 yet so I can't give you code. I'll
post if I do.
Oh, also, you can reference the parent using parentDocument.
So in the popup you could do:
parentDocument.myPublicVariable = "whatever";
Tracy
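
For illustration, the two approaches Tracy describes might look roughly like this in a Flex 2 application. This is only a sketch: RecordDetails (a TitleWindow subclass with public record and mainApp properties) and recordList (a DataGrid) are made-up names, so substitute whatever your application actually defines.

// In the main application: create the popup and push the selected data into it.
import mx.managers.PopUpManager;

private function showDetails():void {
    var win:RecordDetails =
        RecordDetails(PopUpManager.createPopUp(this, RecordDetails, true));
    win.record = recordList.selectedItem;   // hand the popup the record to display
    win.mainApp = this;                     // or pass a reference to the parent itself
    PopUpManager.centerPopUp(win);
}

// Inside the popup you can also reach back the way Tracy mentions:
// var data:Object = parentDocument.myPublicVariable;

Pushing the data in when the popup is created keeps the popup reusable, since it doesn't need to know anything about the parent's internals.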

Similar Messages

  • Best Practices for sharing media with iMovie and FCPX

    So I've a large iMovie Events directory, and would like to use that media with both iMovie and FCPX projects.
    I'd rather not duplicate the media, so would prefer to import as references into FCPX.
    The dilemma is that I see that it's possible to modify or move media from within the iMovie application, and therefore break the reference to that media with FCPX.
    I only see two options:  (1) Never Ever modify the location/name of media in the iMovie Events file (even from within the iMovie app) since I would break an FCPX link if that media is referenced, or (2) always import (copy) the iMovie events into the FCPX Event Library making an independent original so that I can confidently operate on those media files in either application.
    I'd surely rather not have to do (2) (e.g., doubling my storage demands) to gain the flexibility of using either application to edit the video, but really don't want to live with the restrictions of (1).
    Thoughts / Solutions?  What might you consider as options or best practices?

    Unless there is some other reason, users should own the right to share their mailboxes - it shouldn't be something that demands administrator management (if only so that the administrators aren't swamped by user requests for sharing their mailboxes). 
    For true shared mailboxes, when the mailbox is created, full access is granted by an administrator.

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V virtualized AD DC that will be running on a new WinSrv 2012 r2 host server? (This
    will be for a brand new network setup, new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non-virtual and virtual volumes/partitions/drives,
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),
    IDE vs SCSI, drivers (both non-virtual and virtual), and the booting thereof,
    disk caching settings on both host and guests.
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are a small-to-midrange school district that, after close to 20 years on Novell networks, has decided to design and create a new Microsoft network and migrate all of our data and services
    over to the new infrastructure. We are planning on rolling out 2012 r2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undertake this project, and most of the information was pretty solid with little ambiguity, except for
    information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, are still referring to performing this under Srvr 2008 r2, and haven't really
    seen all that much on Srvr 2012 r2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best practice for hierarchical data

    First off, I have to say that JMX in Java 6 is terrific stuff. Bundling jconsole in with Java has made JMX adoption so much easier for us.
    Now, to my question. We have read-only hierarchical data (think a DOM tree) that we would like to publish via JMX. What is the best practice? We see two possibilities:
    1. Publish each node of the tree with its own object name and type. This will allow jconsole to display the information in the tree control.
    2. Publish just the root of the tree with an object name and type and then use CompositeType to describe the nodes of the tree. This means you look at the tree in the "Attribute Value" panel of jconsole.
    Are there any best practices for such data? We have implemented #2 and it works, but we are wondering if long term this might lead to unforeseen consequences.
    Thanks in advance.
    --Marty

    Path,
    I did go with #1 and it worked out great. Every node in our tree is an ObjectName node. Works very well for us.
    --Marty

  • Best practices for apps integration with third party systems ?

    Hi all
    I would like to know if there is any document from oracle or from your own regarding best practices for apps integration with third party systems.
    For example, in particular, let's say a customization in a given module (e.g., Payables) needs to provide data to a third-party system; consider the following:
    Outbound interface:
    1) Should the third-party system be given direct access to the Oracle database, to look for data in a particular payments data table/view?
    2) Should Oracle create a file for the third-party system, so that it can read it and do what it needs to do?
    Inbound:
    1) Should the third party directly log in and insert data into the tables which hold response data?
    2) Again, should the third party create a file that Oracle Apps will pick up for further processing?
    Again, there could be a lot of company-specific scenarios, like whether it has to be real time or not, etc.
    How do companies make sure third-party systems are not directly dipping into other systems (Oracle Apps/others), so that certain integration best practices are followed?
    How does enterprise architecture play a role in this? Can we apply SOA standards? Should we use request/reply using Tibco, etc.?
    Many Oracle Apps implementation customizations more or less interact directly with third-party systems, by including code to log into the respective third-party systems and vice versa.
    Let me know if you have done this differently; that would help the Oracle Apps community.
    thanks
    rrb.

    You want to send an IDoc to a third-party system (non-SAP).
    What kind of system is it? Can it handle HTTP requests, or can it handle a web service?
    Which version of R/3 are you using?
    What is the mechanism the receiving system has to receive data?
    Regards
    Raja

  • Best practice for a site with a lot of images?

    I am working on a site that will have over a hundred images, and I wanted to see what the best practice is for designing a site like this. Should I go with XML (please give examples or an explanation), a text file, or just loadMovie("image1project1.jpg", "bottomsec") with named external images that will stay the same?
    Any help is appreciated on staying up to date with this kind of site.
    Thanks,
    Randy

    OK, I am new, please be nice - I think I want to set it up like this:
    <project1>
    <section>Architecture</section>
    <name>New Building for CREATiVENESS</name>
    <comment>The major challenge to designing this new
    tower was the site constraints - a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project1.jpg</thumb>
    <img1>images/project1img1.jpg</img1>
    <img2>images/project1img2.jpg</img2>
    <img3>images/project1img3.jpg</img3>
    <img4>images/project1img4.jpg</img4>
    </project1>
    <project2>
    <section>Interiors</section>
    <name>New Building for Me</name>
    <comment>The major challenge to designing this new
    tower was the site constraints - a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project2.jpg</thumb>
    <img1>images/project2img1.jpg</img1>
    <img2>images/project2img2.jpg</img2>
    <img3>images/project2img3.jpg</img3>
    <img4>images/project2img4.jpg</img4>
    </project2>
    <project3>
    <section>Architecture</section>
    <name>New Building for You</name>
    <comment>The major challenge to designing this new
    tower was the site constraints - a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project3.jpg</thumb>
    <img1>images/project3img1.jpg</img1>
    <img2>images/project3img2.jpg</img2>
    <img3>images/project3img3.jpg</img3>
    <img4>images/project3img4.jpg</img4>
    </project3>
    <project4>
    <section>Interiors</section>
    <name>New Building for that guy</name>
    <comment>The major challenge to designing this new
    tower was the site constraints - a small 3 acre urban corner site.
    It is located adjacent to a community center to facilitate extended
    use in the evenings and weekends for the entire community.
    </comment>
    <thumb>thumbs/project4.jpg</thumb>
    <img1>images/project4img1.jpg</img1>
    <img2>images/project4img2.jpg</img2>
    <img3>images/project4img3.jpg</img3>
    <img4>images/project4img4.jpg</img4>
    </project4>
    but I am not sure of the best way to run through it to find whether a project is in a given section (to put it in the menu), and then to call the images and text once they are in a project area. I don't know if
    this.firstChild.nextSibling.childNodes[0].childNodes[2]
    is the best way to call things in the file. Any help is appreciated. Please let me know the best practices and easiest way to work with a large XML file.
    Thanks,
    Randy
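
    One way to walk that file without counting nextSiblings by hand is to loop over childNodes and branch on nodeName. Here is a rough ActionScript 2 sketch; it assumes the snippet above is saved as projects.xml and wrapped in a single root node such as <projects> ... </projects> (the file needs one root element to parse), and the file name and the "Architecture" menu filter are only placeholders for your own names.

    // Load the XML and build a plain object per project, keyed by node name.
    var projectsXML:XML = new XML();
    projectsXML.ignoreWhite = true;
    projectsXML.onLoad = function(success:Boolean):Void {
        if (!success) {
            trace("projects.xml failed to load");
            return;
        }
        var projects:Array = this.firstChild.childNodes;    // <project1>, <project2>, ...
        for (var i:Number = 0; i < projects.length; i++) {
            var info:Object = {images: []};
            var fields:Array = projects[i].childNodes;       // <section>, <name>, <comment>, ...
            for (var j:Number = 0; j < fields.length; j++) {
                var node:XMLNode = fields[j];
                var value:String = node.firstChild ? node.firstChild.nodeValue : "";
                if (node.nodeName == "section") {
                    info.section = value;
                } else if (node.nodeName == "name") {
                    info.name = value;
                } else if (node.nodeName == "comment") {
                    info.comment = value;
                } else if (node.nodeName == "thumb") {
                    info.thumb = value;
                } else if (node.nodeName.indexOf("img") == 0) {
                    info.images.push(value);
                }
            }
            // Build the menu by section, e.g. only list Architecture projects here.
            if (info.section == "Architecture") {
                trace("menu item: " + info.name);
            }
            // Later, when a project is opened:
            // loadMovie(info.images[0], "bottomsec");
        }
    };
    projectsXML.load("projects.xml");

    Matching on nodeName this way means you can add a project5 or an img5 without changing any index arithmetic, which is the main thing that makes this.firstChild.nextSibling.childNodes[0].childNodes[2] fragile.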

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL
    queries don't use a WHERE clause, so the vast majority of searches are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
    partition tables that are larger than 50 GB;
    use minimal logging to load data precisely where you want it as fast as possible;
    often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first);
    rebuild statistics after every load of a table.
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices.  Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute
    SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric

    Online ETL solutions are really one of the most challenging solutions, and to do them efficiently you can read my blogs on online DWH solutions to learn how you can configure an online DWH solution for ETL using the MERGE command of SQL Server
    2008, and also to learn some important concepts related to any DWH solution such as indexing, de-normalization, etc.
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities

  • Best practices for initial data loads to MDM

    Hi,
    We need to load more than 300,000 vendors from SAP into the MDM production repository. The import server might take days to load that much if no errors occur.
    Are there any best practices available for initial loads to MDM? What considerations must be made while doing the initial loads?
    Harsha

    Hello Harsha,
    With SP05 Patch 1 there is a file aggregation functionality in the import port. It is supposed to optimize the import performance.
    BTW, give me your mail address and I will send you an IDoc packaging paper for MDM.
    Regards,
    Goekhan

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and the Basis group at my company. I will be working on a project to identify the best practices of System and Service level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring but need to begin mapping out a monitoring strategy.
    We currently utilize the RZ20 transaction and basic CCMS monitors, such as watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you all have found when implementing monitoring for new solutions added to the SAP landscape... what are common things we would want to monitor over, say, ERP, CRM, SRM, etc.?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link ?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini
    Edited by: Srinivasan Radhakrishnan on Jul 7, 2008 7:02 PM

  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to display these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
    This seems to be OK; however, anytime I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory, and then closes itself.
    Obviously, this isn't acceptable to be given to our end users like this, so I'm asking for suggestions.
    Are there settings I can change to get around this memory sucking issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model so they can just click and add the fields they want and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    hope this helps anyone else who may bump into this issue.

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for how to restore the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • Best practices for periodic reboot of MS Windows 2003 Server

    Hi there,
    We are a 4-month-old v9.3.1 Essbase environment running on an 8-CPU MS Windows 2003 Enterprise Edition server, and we run data loads twice daily as well as full outline restructures overnight. Data loads are executed both via SQL retrieves (i.e., from a view set up on another server) and via data file loads after clearing the outline and rebuilding it.
    Over time, I have noticed that the performance of the server was degrading markedly and that calculation scripts are taking longer. Funnily enough, the server's performance goes back to what I would expect it to be after a full reboot.
    My questions are as follows:
    1. Is it typically best practice to reboot MS Windows 2003 servers when dealing with a heavily accessed environment?
    2. If yes, is it mentioned anywhere in the Essbase manuals that MS Windows servers ought to be rebooted on a periodic basis in order to perform at their optimal best?
    3. Does Microsoft recommend such a practice of rebooting their servers on a periodic basis? I looked throughout their Knowledge Base but couldn't find any mention of it, in spite of the fact that a periodic reboot obviously boosts the performance of MS Windows servers.
    Thanks in advance for your responses/recommendations
    J


  • Best practices for PAL and HANA Modeling

    Hi all
    I wanted to get your opinion on working with the Predictive Analysis Library and SAP HANA modeling - analytical views, calculated views, etc.
    If I have transactional data modelled into an analytical view and now wish to run a predictive function on some of its columns, what are your suggestions on how best to go about building a PAL model within HANA?
    Do I run the PAL procedure every time the table is updated? How do I decide on the frequency of the PAL procedure? Any tips?
    And do I use triggers for automated procedure runs?
    What are some of the best practices?
    If this has been brought up before, please point me to the right discussion.
    Thanks!


  • Obiee 11g : Best practice for filtering data allowed to user

    Hi gurus,
    I have a table of the allowed areas for each user.
    I want to show only the data facts associated with these allowed areas.
    For instance, my user scott can see France and Italy data.
    I made a session variable and put it in a filter.
    It works OK, but only one value (the first one, I think) is taken into account (for instance, with my solution scott will see only France data).
    I need all the possible values.
    I tried with the row-wise initialization parameter of the session variable, but it doesn't work (OBIEE error).
    I've read things on the internet about using STRAGG or VALUELISTOF, but neither worked.
    What would be the best practice to achieve this goal of filtering data with per-user conditions stored in the database?
    Thanks in advance, Emmanuel

    Check this link
    http://oraclebizint.wordpress.com/2008/06/30/oracle-bi-ee-1013332-row-level-security-and-row-wise-intialized-session-variables/

  • Best practices for sharing a SAN-attached tape loader between servers.

    When configuring zones to allow a tape loader to be shared by multiple servers, is there a preferred zoning method?
    For instance, I have my primary fabric configured so that the zone for each data server using a LUN on my array consists of the primary port of the HBA on the server and the primary port of the HBA for each controller on my array.
    My backup server does not use any LUNs on the array, so its zone consists solely of its primary HBA port and the HBA port of the tape loader.
    If I want to give my data servers access to the tape loader, should I add the tape loader's port to the zone of each server, or should I add the port of each server to the zone that currently consists of only the backup server and the tape loader?
    Or does it matter?
    The network is small:
    One Windows server dedicated to backup, three NetWare servers handling data storage and 12 other VM servers running a mixture of Linux, NetWare, and Windows that handle various services but don't contain any significant amounts of data.
    My intent is to give the 3 data servers access to the tape loader directly, so that their backup streams don't involve the LAN.
    The remaining servers are small enough that backing them up over the LAN is not an issue.
    I doubt that it matters for this, but the SAN switches are MDS9124's and the SAN array is an HDS AMS2100 with active/active controllers.
    All server HBA's are dual port, as are the HBA's on each array controller.
    In addition to the primary zone, each server and the array controllers are attached to a failover zone via the 2nd port of the HBA's.
    Unfortunately, my backup software doesn't support NDMP, so I can't back up the array LUN's directly to the tape loader.

    NDMP is for backing up NAS platforms.
    Does your backup software support "LAN-Free" backup? Typically, enterprise backup software like NetBackup, TSM, or Networker requires a special license/agent that gets loaded on the server where you are going to implement LAN-Free backups. Without that software/license, servers will be fighting for tape resources and it will be a mess (if it works at all). Also, you want to use a dedicated HBA or port on a dual HBA for tape traffic; do not mix tape and disk traffic on the same HBA/port. In big shops people configure dedicated "tape" VSANs, but that would be overkill for your current environment.
    @dynamoxxx
