ClusterTools5.0 Internal Architecture

Dear All,
I am a PhD student working in the HPC area. I am looking at the ClusterTools 5.0 source code to understand what distinguishes it from other MPI implementations, but it has very little documentation, unlike the others.
Can anyone suggest books, papers, slides, or websites covering the ClusterTools internal architecture?
Thanks very much
Noah Yan

Online marketing is not PIA. It is a separate web app. The standard PIA PeopleTools concepts do not apply to online marketing.
1. You determine what happens when a user clicks a button in online marketing through the online marketing dialog designer. You can execute PeopleCode or Java at any point in the dialog by adding a custom action. You can find an example of a custom Java action and a custom PeopleCode action in the PeopleBook "PeopleSoft Enterprise Online Marketing 9 PeopleBook > Using Extensions."
2. Yes, you can manually create profile tables for OLM. Sorry, I can't remember how to do this. I have never needed to do this. It appears that you know how to create a profile and use those profile fields in dialog designer. If you are interested in looking at the profile field data and responses, then look at PS_RA_PROFILE, PS_RA_ATTRIBUTE, and PS_RA_ATTR_CHOICES. If you want to display this data on a PIA page or report off this data, then you can create views against these tables.
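If you go the view route, the shape is roughly this. This is a hypothetical sketch: the view name and the join columns shown here are illustrative, not the delivered definitions, so check the actual record definitions of PS_RA_PROFILE and PS_RA_ATTRIBUTE in your release first:

```sql
-- Hypothetical reporting view over the OLM profile tables.
-- Column names below are illustrative; verify them against the
-- delivered PS_RA_PROFILE / PS_RA_ATTRIBUTE record definitions.
CREATE VIEW PS_MY_PROFILE_VW AS
SELECT p.PROFILE_ID,
       a.ATTRIBUTE_ID,
       a.ATTRIBUTE_NAME
FROM   PS_RA_PROFILE   p,
       PS_RA_ATTRIBUTE a
WHERE  a.PROFILE_ID = p.PROFILE_ID;
```

In practice you would build this as a view record in Application Designer rather than hand-written DDL, so the view is registered in the PeopleTools metadata and usable on PIA pages and in reports.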
3. I believe that OLM (online marketing) just stores choices/answers in the RA tables. I don't think it updates any student admin tables. I could be wrong on this though. Your rep should be able to put you in contact with someone that can give you a definitive answer on that. If you have to update the student admin tables yourself, then I would create the profile fields with whatever names seem appropriate to you, then use a PeopleCode custom action to either insert the prospect's answers into the student admin tables or run the answers through a CI (CI is preferred).
CRM and Siebel are different products. CRM is the PeopleSoft product acquired by Oracle through the PeopleSoft acquisition. CRM is built using the PeopleTools technology. Siebel was acquired by Oracle through a separate acquisition and is a CRM product built using a different toolset.

Similar Messages

  • ADF BC+ Faces Internal architecture

Can someone tell me where I can find the internal architecture of ADF BC + Faces,
i.e. how Entity Objects, VOs, Application Modules, and Backing/Managed Beans interact.
    Thanks in Advance.

    Thank you very much Frank/Shay/Timo..
I am looking for sequence diagrams for a typical request life cycle (from the jspx to the entity object).
Here are some questions lingering in my mind:
1) In typical JSF environments, when a form is submitted, the submitted data (even the value objects) is available in the managed bean, where we can work with it.
How can I access the value object being submitted in ADF the same way? That would increase our flexibility.
2) I visualize the Application Module as a kind of facade that works with the ORM entity objects to get the CRUD jobs done. Is this correct?
From the entity object's generated code, it seems it is getting injected somewhere (dependency injection).
So when a user clicks a button, the request goes to the managed bean, which calls the Application Module, which calls the entity object's CRUD operations.
(Now where does the updatable VO fit in?)
    Thanks in advance
    Edited by: user11922045 on Oct 27, 2010 9:03 AM

  • Oracle database Internal architecture

    Hi
Can anyone explain the Oracle database internal architecture to me?
    thanks & regards

user9098698 wrote:
Hi
Can anyone explain the Oracle database internal architecture to me?

Yes, there are people who can explain this to you.
    How much time do you want to spend? I've been learning the Oracle internal architecture for over 20 years and there are still new things to learn. (Well ... it also depends on what you mean by 'internal architecture'.)
    If you want the basics - the Oracle Documentation set at http://tahiti.oracle.com has a Concepts manual that does a decent job of describing the basic architecture. And the Windows Platform Guide, the Unix Administrator's Reference give a fair bit of additional information. As does the Oracle Networking reference.
    And there are courses. Oracle University courses provide a decent overview of the architecture. And Jonathan Lewis, Richard Foote and others give excellent courses and seminars on specific aspects of the architecture.
And there are books. Search for books by Tom Kyte and Robert Stackowiak.
    And internet search.
    And http://orainfo.com/architecture/architecture.htm
    And ...

  • Internal architecture of R-tree implementation

    Does anyone know where I can get whitepapers which describe the architecture of R-tree implementation in Oracle?
I need it for my undergraduate thesis. It will be a case study of the R-tree implementation in Oracle Spatial. Actually, I want to focus on why the R-tree was adopted in production while scientists are still shaping it into a more applicable and reliable form.
    But ya, if it is too hard, perhaps I will focus on the architecture itself.
    Anyone can help?

    Several papers were published in the last few
    ICDE, SIGMOD and SSTD conferences about how R-trees are implemented in Oracle.
    You can get most of the architecture details from those papers.

  • Image of the HotSpot architecture

    Hi everybody,
    I'm looking for a good picture of the internal architecture of HotSpot with all steps :
    - classloader => loading -> linking (verification, preparation, resolution, access control) -> initialisation
    - compilation => client / server
    - bytecode interpreter
    - garbage collector
    - memory
    - etc
I searched on the web but found nothing :(
    Have you got a link please ?
    thanks for your help
    Obelix

    Hi,
    the virtual machine itself does that.
HotSpot is a just-in-time compiler that compiles Java bytecode to native code on the fly for performance reasons.
    [Link to Virtual Machine Specification|http://java.sun.com/docs/books/jvms/second_edition/html/VMSpecTOC.doc.html]
    regards,
    Owen

  • Add an internal SCSI HD to G3 MT

    Hi everybody,
    Your advice again please. I obtained a used Quantum Viking 3.5 series 68-pin internal HD. Can it be installed in my G3 MT as a second HD? I suppose a cable with appropriate connector (& terminator?) is needed. What are the set-up procedures and to what should I pay special attention?
    Can the same HD be installed in my G4 AGP? Many thanks.
    Henry

    Henry,
    This site has some additional information available that you might like to read.
    Ultra 2 SCSI drive FAQ
    http://www.transintl.com/technotes/macupgfaq.htm
    "my Gossamer G3 300 MT there was an onboard SCSI bus that was shared with the external DB-25 SCSI port at the rear. I thus wonder if I could connect the Quantum drive to the G3 motherboard via a 50-pin cable (I got a 68 to 50 adapter at home for the HD)."
    Short answer, Yes it will work fine. It will even work in a 6100/7100/8100 or a Quadra series computer. If you buy a PCI SCSI controller card, you can get faster performance but that is more money when you are ready.
    "Structurally I know it is possible but am just concerned that G3 would not accept this extra HD internally. "
    You are correct. It can be done with the adapter. In fact, if you place the adapter into the onboard connector, you can then have a 68 pin cable run to all new drives. You must be running an IDE drive in your MT now. The computer will accept the drive just fine. Finding a 68 pin ribbon cable with a terminator attached would be the best solution. A recycler in Seattle sells them with a terminator for less than $5 US. Hong Kong will have similar options available to you. My son found that adapter in a small shop in Tokyo. Easy to find.
    "Today the G3 refused to boot up after the 'gong' sound, after I had opened up the case to look at the internal architecture. I had to disconnect the USB card, extra CD drive, reduce the RAM to 128MB, change the PRAM battery, reset the CUDA button and zap the PRAM etc to bring it back to life. "
    I think that a 'warm boot' (using a keyboard command to restart after the 'gong sound' or startup chime) should have been your first step to avoid all the other disruption to the system. Less effort is sometimes more effective.
    "Would the addition of the SCSI drive disturb the harmony of my existing system? I want to have a second physical HD in G3 to house the OS 9.2.2."
Short answer, no disturbance. In my first post, I assumed the possibility of an existing SCSI HDD in your MT. Since you do not have the SCSI card that was included with G3 servers, the drive you have now is IDE. It is on a different controller. Therefore, you do not have to worry about ID conflicts until you add a third drive. If you can find those drives for about $5 US then you can add several more economically on the 68 pin ribbon I suggested earlier. You will have slower performance until you buy a faster controller card but all other parts will work with a new card if you find one.
    "My G4 AGP does not have an onboard SCSI bus and therefore is out of question. Thanks again for your advice."
If you do some careful shopping, you can find a PCI SCSI controller card that will fit in your G4 and the G3. Grant suggested one last year and my son bought two for $25 apiece. If you can find a computer recycler, they might have one in an old tower for $10. Someone posted that they found a G3 in London for 10 Pounds UK, so I know others are finding good deals. Watch for parts still in used computers and the upgrade will not be out of the question.
    Good luck,
Jim
    By the way, May 16, 2006 8:01 AM - a year ago, Mack Wong from Hong Kong posted with a similar request for help installing a 68 pin drive in his 6100. He found the adapter and cable and the drive worked just fine for him. You have plenty of options available to you in Hong Kong.

  • Oracle IPM, UCM architecture

    I'm a newbie to Oracle ECM. After setting ECM & IPM, I'm bit confused on the repository. Is Oracle ECM not a unified CMS? Are there separate repositories for IPM and UCM?
    I shut down the content server and even then IPM was working fine. What is the advantage I get if I connect IPM and UCM using CIS?
    Does anyone have any sorts of material which describe the underlying internal architecture of UCM and IPM?
    Thanks!

    In the beginning, there was a company, Optika and a company Stellent. Optika had a product called Acorde, later called IBPM, now called IPM. Stellent had a product called UCM. Stellent bought Optika and ultimately Oracle bought Stellent.
As of IPM version 7.7, the repository remains unique to IPM and does not use the UCM repository. IPM is more workflow-oriented than UCM. UCM is what the acronym says: Universal Content Management.
    I'll leave it to others to describe where the 2 products are headed. There is a public wiki at wiki.oracle.com that eventually will have this info.
    If you can get to:
    http://www.oracle.com/products/middleware/content-management/index.html
    you should find more info.
    32U

  • Oracle 10G New Feature........Part 1

    Dear all,
for the last couple of days I have been very busy with my Oracle 10g box, so I think this is the right time to
share some interesting 10g features and some internal stuff with all of you.
    Have a look :-
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Oracle 10g Memory and Storage Feature.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Automatic Memory Management
2. Online Segment Shrink
3. Redolog Advisor, Checkpointing
4. Multiple Temporary Tablespaces
5. Automatic Workload Repository
6. Active Session History
7. Misc
a) Rename Tablespace
b) Bigfile Tablespace
c) Flushing Buffer Cache
8. ORACLE INTERNAL
a) Undocumented parameter (_log_blocks_during_backup)
b) X$ view (x$messages view)
c) Internal Structure of the Controlfile
    1.Automatic memory management
    ================================
This feature reduces the overhead on the Oracle DBA. Previously we had to size the different SGA parameters ourselves for
better performance, relying on experience, the advisory views, and monitoring of the database's behaviour.
That was a time-consuming activity.
Now this feature makes life easier for the Oracle DBA.
Just set the SGA_TARGET parameter and Oracle automatically distributes memory among the SGA components.
It covers DB_CACHE_SIZE
SHARED_POOL_SIZE
LARGE_POOL_SIZE
JAVA_POOL_SIZE
and automatically sets them as
__db_cache_size
__shared_pool_size
__large_pool_size
__java_pool_size
You can check this in the alert log.
The MMAN (Memory Manager) process is new in 10g and is responsible for the SGA tuning task.
It automatically increases and decreases the SGA parameter values as required.
Benefit: maximum utilization of the available SGA memory.
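As a quick illustration (the sizes shown are made up; use values appropriate to your server), enabling automatic shared memory management and then watching what MMAN has allocated looks roughly like this:

```sql
-- Enable automatic shared memory management (size illustrative).
ALTER SYSTEM SET sga_target = 600M SCOPE = BOTH;

-- Leaving the individual pools at 0 lets MMAN size them freely;
-- a non-zero value acts as a minimum for that component.
ALTER SYSTEM SET shared_pool_size = 0;
ALTER SYSTEM SET db_cache_size = 0;

-- See what MMAN has currently allocated to each component.
SELECT component, current_size
FROM   v$sga_dynamic_components;
```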
    2.Online Segment Shrink.
    ==========================
hmmmm, again a new feature from Oracle to reduce downtime. Oracle now focuses mainly on availability,
which is why it keeps introducing features that cut downtime.
In previous versions, lowering the high-water mark (HWM) of a table was possible via
exp/imp
or
ALTER TABLE ... MOVE, but with these methods the table was unavailable for normal use for long hours if it held a lot of data.
In 10g, just a few commands reduce the HWM of a table.
This feature is available for ASSM tablespaces:
1. alter table emp enable row movement;
2. alter table emp shrink space;
The second command has two phases:
in the first phase the segment is compacted, and DML operations are still allowed;
in the second (shrink) phase Oracle lowers the HWM of the table, and DML operations are blocked for a short duration.
So if you want to shrink the HWM of a table, use two separate commands:
first compact the segment, then shrink it during off-peak hours.
alter table emp shrink space compact; (this command does not block DML operations)
alter table emp shrink space; (run this during off-peak hours)
Benefit: better full table scans.
    3.Redolog Advisor and checkpointing
    ================================================================
Oracle will now suggest the redo log file size via V$INSTANCE_RECOVERY:
SELECT OPTIMAL_LOGFILE_SIZE
FROM V$INSTANCE_RECOVERY;
This value is influenced by the value of FAST_START_MTTR_TARGET.
Checkpointing
Automatic checkpoint tuning is enabled by setting FAST_START_MTTR_TARGET to a non-zero value.
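For example (the 60-second target is illustrative; pick a value that matches your recovery SLA):

```sql
-- Target at most ~60 seconds of crash recovery; this also enables
-- automatic checkpoint tuning and populates the redo log advisor.
ALTER SYSTEM SET fast_start_mttr_target = 60 SCOPE = BOTH;

-- Then read back the suggested redo log file size (in MB).
SELECT optimal_logfile_size FROM v$instance_recovery;
```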
    4.Multiple Temporary tablespace.
    ==================================
Now we can manage multiple temporary tablespaces under one group.
We can create a tablespace group implicitly by including the TABLESPACE GROUP clause in a CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement when the specified group does not yet exist.
For example, if group1 does not exist, the following statement creates the group along with a new tablespace:
CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '/u02/oracle/data/temp01.dbf'
SIZE 50M
TABLESPACE GROUP group1;
-- Add an existing temp tablespace to the group:
ALTER TABLESPACE temp2 TABLESPACE GROUP group1;
-- We can also assign the tablespace group as the database-level default temporary tablespace:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE group1;
Benefit: better I/O.
One SQL statement can use more than one temp tablespace.
5. AWR (Automatic Workload Repository):-
==================================
AWR is a built-in repository and the central point of Oracle 10g; Oracle's self-managing activities depend fully on it. By default, every hour Oracle captures database usage information and stores it in the AWR with the help of the MMON (Memory Monitor) process. This information is kept for 7 days by default and is then automatically purged.
We can generate an AWR report with
SQL> @?/rdbms/admin/awrrpt
It is just like a statspack report, but a more advanced version: it provides more information about the database as well as the OS, and shows the report in HTML or text format.
We can also take a manual AWR snapshot:
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
END;
/
**The STATISTICS_LEVEL initialization parameter must be set to TYPICAL or ALL to enable the Automatic Workload Repository.
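The 1-hour interval and 7-day retention can also be changed through DBMS_WORKLOAD_REPOSITORY; for example (the values shown are illustrative, and both arguments are in minutes):

```sql
-- Keep 14 days of snapshots, taken every 30 minutes.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 14 * 24 * 60,   -- minutes
    interval  => 30);            -- minutes
END;
/
```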
    [oracle@RMSORA1 oracle]$ sqlplus / as sysdba
    SQL*Plus: Release 10.1.0.2.0 - Production on Fri Mar 17 10:37:22 2006
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    SQL> @?/rdbms/admin/awrrpt
    Current Instance
    ~~~~~~~~~~~~~~~~
    DB Id DB Name Inst Num Instance
    4174002554 RMSORA 1 rmsora
    Specify the Report Type
    ~~~~~~~~~~~~~~~~~~~~~~~
    Would you like an HTML report, or a plain text report?
    Enter 'html' for an HTML report, or 'text' for plain text
    Defaults to 'html'
    Enter value for report_type: text
    Type Specified: text
    Instances in this Workload Repository schema
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    DB Id Inst Num DB Name Instance Host
    * 4174002554 1 RMSORA rmsora RMSORA1
    Using 4174002554 for database Id
    Using 1 for instance number
    Specify the number of days of snapshots to choose from
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Entering the number of days (n) will result in the most recent
    (n) days of snapshots being listed. Pressing <return> without
    specifying a number lists all completed snapshots.
    Listing the last 3 days of Completed Snapshots
    Snap
    Instance DB Name Snap Id Snap Started Level
    rmsora RMSORA 16186 16 Mar 2006 17:33 1
    16187 16 Mar 2006 18:00 1
    16206 17 Mar 2006 03:30 1
    16207 17 Mar 2006 04:00 1
    16208 17 Mar 2006 04:30 1
    16209 17 Mar 2006 05:00 1
    16210 17 Mar 2006 05:31 1
    16211 17 Mar 2006 06:00 1
    16212 17 Mar 2006 06:30 1
    16213 17 Mar 2006 07:00 1
    16214 17 Mar 2006 07:30 1
    16215 17 Mar 2006 08:01 1
    16216 17 Mar 2006 08:30 1
    16217 17 Mar 2006 09:00 1
    Specify the Begin and End Snapshot Ids
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Enter value for begin_snap: 16216
    Begin Snapshot Id specified: 16216
    Enter value for end_snap: 16217
    End Snapshot Id specified: 16217
    Specify the Report Name
    ~~~~~~~~~~~~~~~~~~~~~~~
    The default report file name is awrrpt_1_16216_16217.txt. To use this name,
    press <return> to continue, otherwise enter an alternative.
Benefit: Now the DBA has more free time to play games..................... :-)
An advanced version of statspack
More DB and OS information, with self-managing capability
New automatic alerts and database advisors driven by AWR
    6.Active Session History:-
    ==========================
V$ACTIVE_SESSION_HISTORY is a view that contains recent session history.
The memory for ASH comes from the SGA and cannot exceed 5% of the shared pool.
So we can get the latest active-session data from the V$ACTIVE_SESSION_HISTORY view, and historical session data from DBA_HIST_ACTIVE_SESS_HISTORY.
V$ACTIVE_SESSION_HISTORY includes some important columns, such as:
~ SQL identifier of the SQL statement
~ Object number, file number, and block number
~ Wait event identifier and parameters
~ Session identifier and session serial number
~ Module and action name
~ Client identifier of the session
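A typical use is finding the top SQL by sampled activity over the last few minutes; a small sketch (the 10-minute window is arbitrary):

```sql
-- Top SQL by number of ASH samples in the last 10 minutes.
SELECT sql_id, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 10/1440
GROUP  BY sql_id
ORDER  BY samples DESC;
```

Because ASH is sampled once per second, the sample count is a rough proxy for DB time spent in each statement.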
    7.Misc:-
    ========
    Rename Tablespace:-
    =================
In 10g we can even rename a tablespace:
alter tablespace <tb_name> rename to <tb_name_new>;
This command updates the controlfile, data dictionary, and datafile headers, but the .dbf filename stays the same.
**We can't rename the SYSTEM and SYSAUX tablespaces.
    Bigfile tablespace:-
    ====================
A bigfile tablespace contains only one datafile.
A bigfile tablespace with 8K blocks can contain a 32-terabyte datafile.
Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment-space management.
We can take advantage of bigfile tablespaces when using ASM or another logical volume manager with RAID; without ASM or RAID they give poor response.
Syntax (datafile path and size illustrative):
CREATE BIGFILE TABLESPACE bigtbs
DATAFILE '/u02/oracle/data/bigtbs01.dbf' SIZE 10G;
    Flushing Buffer Cache:-
    ======================
This option is the counterpart of flushing the shared pool, but is only available from 10g onwards.
I don't know what use this command has on a production database, but we can try it on a test server when tuning and testing queries:
SQL> alter system flush buffer_cache;
System altered.
    ++++++++++++++++++
    8.Oracle Internal
    ++++++++++++++++++
Here is some stuff that is not 10g-specific but contains some interesting things.
a) Undocumented parameter "_log_blocks_during_backup"
++++++++++++++++++++++++
As we know, Oracle generates more redo during hot backup mode because it has to write a complete copy of each changed block into the redo log to guard against split blocks.
We can change this behaviour by setting this parameter to FALSE when the Oracle block size equals the operating system block size, thus reducing the amount of redo generated during a hot backup.
DON'T SET THIS ON A PRODUCTION DATABASE WITHOUT ORACLE SUPPORT. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY.
b) Some X$ views (X$MESSAGES)
++++++++++++++++
If you are interested in Oracle internal architecture, the X$ views are the right place to find some interesting things.
X$MESSAGES shows all the actions that the background processes can perform:
select * from x$messages;
For example:
    lock memory at startup MMAN
    Memory Management MMAN
    Handle sga_target resize MMAN
    Reset advisory pool when advisory turned ON MMAN
    Complete deferred initialization of components MMAN
    lock memory timeout action MMAN
    tune undo retention MMNL
    MMNL Periodic MQL Selector MMNL
    ASH Sampler (KEWA) MMNL
    MMON SWRF Raw Metrics Capture MMNL
    reload failed KSPD callbacks MMON
    SGA memory tuning MMON
    background recovery area alert action MMON
    Flashback Marker MMON
    tablespace alert monitor MMON
    Open/close flashback thread RVWR
    RVWR IO's RVWR
    kfcl instance recovery SMON
c) Internal Structure of the Controlfile
++++++++++++++++++++++++++++++++++++
The contents of the current controlfile can be dumped in text form.
Dump Level - Dump Contains
1 - only the file header
2 - the file header, the database info record, and checkpoint progress records
3 - all record types, but just the earliest and latest records for circular-reuse record types
4 - as above, but including the 4 most recent records for circular-reuse record types
5+ - as above, but the number of circular-reuse records included doubles with each level
The session must be connected AS SYSDBA:
alter session set events 'immediate trace name controlf level 5';
This dump shows lots of interesting information; it also shows RMAN records if the controlfile has been used for RMAN backups.
    Thanks
    Kuljeet Pal Singh

You can find each doc in HTML and PDF format in the Documentation Library.
You can also download all the documentation in HTML format to have it all on your own computer here (445.8 MB).
    Nicolas.

  • Deployment in enterprise environment

    Dear Team members,
    I would like to post a question to you regarding deployment doubts that we are trying to address.
    My company is working on the new version of our primary application previously built as a J2EE application with some reporting functions with Flex, and we want to use AIR in order to leverage its possibilities:
    Seamless integration with existing application functionalities (implemented as standard JEE web application pages) thanks to the integrated HTML capabilities
    Improved integration of the user interface with the desktop
    Native processes to provide additional functionalities
    Our application is targeted to pharmaceutical industry, subject to FDA regulations, and it affects more than 5000 users for each customer, so we have some specific requirements affecting the deployment and distribution of the software:
    Allow to run multiple versions of the software on the same client machine (to support test and acceptance activities in addition to the production environment)
    Minimize the effort of the initial setup on each client
    Manage the version upgrades without manual activities on each client
    Keep the test/acceptance and production environments strictly aligned to improve effectiveness of formal validation (ideally, an application once validated should be transported in production without any source code modification, recompilation or repackaging)
    The current browser-based strategy is perfectly fit to these requirements, and in the shift towards a desktop-based strategy we need to continue satisfying them as much as possible. We evaluated the standard distribution strategy of Adobe AIR applications, and noticed several attention points in this scenario.
    The first issue we encountered is the back-end services endpoint discovery problem. Simply hardcoding a server URL in the packaged application could be a viable solution for public internet-accessible applications, but we need to support multiple customers in their intranet, and each one typically requires multiple environments for the application (acceptance, production, etc.). Maintaining dozens of different packages of the AIR application to support all these customer environments clearly is not the solution. Neither we want to force thousands of different users to enter and maintain the correct server location in their local preferences.
    So, we thought to use a badge hosted in the back-end application to run the local AIR application: using the underlying API, we could activate the application specifying also the network location of the back-end services. We could also rely on the badge to install the application (and the AIR runtime if necessary)… however, application packaged as native installers cannot be installed, upgraded, or launched by the badge API (and we need to package ours as native to use native processes).
    We also noticed that multiple versions of an AIR application cannot be installed side-by-side in a client machine, and that the installation and upgrade of the application can be performed only when the local user has administrative rights on the machine (using standard or native packages), forcing us to rely on external software distribution systems in some customer scenarios (introducing additional complexities in the release cycle)
    At this point, in our opinion the standard deployment strategies of Adobe AIR applications are unfit for enterprise environments. In the enterprise world, many of the applications have migrated to a completely browser-based solution, while others enhanced their client layer to comply with the requirements, for example installing only a thin portion of the client code and allowing to connect to multiple server versions/environments with it (e.g. the SAP GUI universal client). Without smarter deployment and distribution tools, AIR applications currently are a step back compared to web applications in terms of manageability.
    So, we are trying to develop a solution to address these problems, with some concepts similar to JStart: install on the client machine a launcher application capable of being activated from a web page, dynamically locate, download and run the actual client bytecode, transparently enforce client software updates, and supporting multiple applications (and multiple versions of the same application). However, we are facing many technical problems due to internal architecture of AIR and we already spent a considerable amount of effort trying to find a solution. We are now thinking to return on the choice of AIR, going back to Flex.
    What is the position of Adobe on this argument? Is Adobe aware of these issues and are there any plans on this topic? Any advice?
    Thank you in advance

    For those following along, Oliver Goldman will be answering this post in future articles on his blog.
    Many great comments and questions here. I’m working on some follow-up posts to address these; nothing I could cram into this comment field would really do your query justice. - Oliver Goldman
    Pursuit of Simplicity
    Chris

  • Deployment of an Adobe AIR application in an enterprise environment

    Dear Team members,
    first of all my apologies for posting this thread in more than one forum (see Installations Issues) but the argument is very important to us and I don't know where discuss it.
    I would like to post a question to you regarding deployment doubts that we are trying to address.
    My company is working on the new version of our primary application previously built as a J2EE application with some reporting functions with Flex, and we want to use AIR in order to leverage its possibilities:
    Seamless integration with existing application functionalities (implemented as standard JEE web application pages) thanks to the integrated HTML capabilities
    Improved integration of the user interface with the desktop
    Native processes to provide additional functionalities
    Our application is targeted to pharmaceutical industry, subject to FDA regulations, and it affects more than 5000 users for each customer, so we have some specific requirements affecting the deployment and distribution of the software:
    Allow to run multiple versions of the software on the same client machine (to support test and acceptance activities in addition to the production environment)
    Minimize the effort of the initial setup on each client
    Manage the version upgrades without manual activities on each client
    Keep the test/acceptance and production environments strictly aligned to improve effectiveness of formal validation (ideally, an application once validated should be transported in production without any source code modification, recompilation or repackaging)
    The current browser-based strategy is perfectly fit to these requirements, and in the shift towards a desktop-based strategy we need to continue satisfying them as much as possible. We evaluated the standard distribution strategy of Adobe AIR applications, and noticed several attention points in this scenario.
    The first issue we encountered is the back-end services endpoint discovery problem. Simply hardcoding a server URL in the packaged application could be a viable solution for public internet-accessible applications, but we need to support multiple customers in their intranet, and each one typically requires multiple environments for the application (acceptance, production, etc.). Maintaining dozens of different packages of the AIR application to support all these customer environments clearly is not the solution. Neither we want to force thousands of different users to enter and maintain the correct server location in their local preferences.
    So, we thought to use a badge hosted in the back-end application to run the local AIR application: using the underlying API, we could activate the application specifying also the network location of the back-end services. We could also rely on the badge to install the application (and the AIR runtime if necessary)… however, application packaged as native installers cannot be installed, upgraded, or launched by the badge API (and we need to package ours as native to use native processes).
    We also noticed that multiple versions of an AIR application cannot be installed side by side on a client machine, and that installation and upgrade of the application can be performed only when the local user has administrative rights on the machine (using standard or native packages), forcing us to rely on external software distribution systems in some customer scenarios (introducing additional complexity into the release cycle).
    At this point, in our opinion the standard deployment strategies for Adobe AIR applications are unfit for enterprise environments. In the enterprise world, many applications have migrated to a completely browser-based solution, while others have enhanced their client layer to meet these requirements, for example by installing only a thin portion of the client code and allowing it to connect to multiple server versions/environments (e.g. the SAP GUI universal client). Without smarter deployment and distribution tools, AIR applications are currently a step backwards compared to web applications in terms of manageability.
    So we are trying to develop a solution to address these problems, with some concepts similar to JStart: install on the client machine a launcher application capable of being activated from a web page, dynamically locating, downloading and running the actual client bytecode, transparently enforcing client software updates, and supporting multiple applications (and multiple versions of the same application). However, we are facing many technical problems due to the internal architecture of AIR, and we have already spent a considerable amount of effort trying to find a solution. We are now reconsidering the choice of AIR and thinking of going back to Flex.
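    The launcher's update and side-by-side logic described above can be sketched roughly as follows. This is a Python illustration of the approach only; the manifest format, field names, and URLs are all hypothetical:

```python
import json

def needs_update(manifest_json, installed_version):
    """Compare the server's version manifest (JSON text) against the
    locally installed version to decide whether the launcher must
    download fresh client bytecode before starting the application."""
    manifest = json.loads(manifest_json)
    return manifest["latest"] != installed_version

def select_build(manifest_json, pinned_version=None):
    """Support side-by-side versions: acceptance testers can pin an
    older build while production users transparently get the latest."""
    manifest = json.loads(manifest_json)
    builds = {b["version"]: b["url"] for b in manifest["builds"]}
    version = pinned_version or manifest["latest"]
    return version, builds[version]

# Hypothetical manifest the back-end would serve to the launcher.
MANIFEST = json.dumps({
    "latest": "2.1.0",
    "builds": [
        {"version": "2.0.0", "url": "https://server.example.com/app-2.0.0.swf"},
        {"version": "2.1.0", "url": "https://server.example.com/app-2.1.0.swf"},
    ],
})

print(needs_update(MANIFEST, "2.0.0"))
print(select_build(MANIFEST, pinned_version="2.0.0"))
```

    Because the manifest lives on the server, pushing an update means publishing a new build entry, with no manual activity on each client.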
    What is Adobe's position on this? Is Adobe aware of these issues, and are there any plans in this area? Any advice?
    Thank you in advance

    For those following along, Oliver Goldman will be answering this post in future articles on his blog.
    Many great comments and questions here. I’m working on some follow-up posts to address these; nothing I could cram into this comment field would really do your query justice. - Oliver Goldman
    Pursuit of Simplicity
    Chris

  • What is new with MAX 2.0 and is it compatible with Session Manager?

    We added non-IVI instrument information, using basically the same structure as for IVI instruments, into the ivi.ini file to keep all instrument information in the same place. Under MAX 1.1 this caused no problems whatsoever and the system worked fine. With the advent of MAX 2.0 you seem to use ivi.ini as well as config.mxs to store instrument information. What we have found now is that, given a working ivi.ini file from MAX 1.1, we end up with 2 or 3 copies of all the devices in the IVI Instruments->Devices section! When the duplicate entries are deleted and the application exited, the ivi.ini file is updated minus the [Hardware->] sections which contain the resource descriptors that our applications look for. As an added complication, under MAX 2.1 (from an evaluation of the Switch Executive) it behaves the same, except that it almost always crashes with one of the following errors: 'OLEChannelWnd Fatal Error' or 'Error #26 window.cpp line 10028 LabVIEW Version 6.0.2'. Once opened and closed, MAX 2.1 will not open again! (Note: we do not have LabVIEW on the system.) What is the relationship between config.mxs and ivi.ini now? Also, your Session Manager application (for use with TestStand) extracts information from ivi.ini and may expect entries to be manually entered into ivi.ini (e.g. NISessionManager_Reset = True), i.e. is the TestStand Session Manager compatible with MAX 2.0?

    Brian,
    The primary difference between MAX 1.1 and 2.x is that there is a new internal architecture. MAX 2.x synchronizes data between the config.mxs and the ivi.ini. The reason you're having trouble is that user-editing of the ivi.ini file is not supported with MAX 2.x.
    Some better solutions to accomplish what you want:
    1. Do as Mark Ireton suggested in his answer
    2. Use the IVI Run-Time Configuration functions. They will allow you to dynamically configure your Logical Names, Virtual Instruments, Instrument Drivers, and Devices. You can then use your own format for storing and retrieving that information, and use the relevant pieces for each execution. You can find information on these functions in the IVI CVI Help file located in the Start >> National Instruments >> IVI Driver Toolset folder. Go to the chapter on Run-time Initialization Configuration.
    I strongly suggest #2, because those functions will continue to be supported in the future, while other mechanisms may not be.
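    Since hand-editing ivi.ini is no longer supported, option 2 amounts to keeping the logical-name-to-resource mapping in a format you own and feeding each entry to the run-time configuration calls at startup. A rough illustration of the storage side in Python (the JSON file format and the `load_instrument_map` helper are our own invention here, not an NI format or API):

```python
import json
import tempfile

def load_instrument_map(path):
    """Read our own instrument configuration file (a format we control,
    NOT ivi.ini) and return a mapping of logical names to resource
    descriptors. Each entry would then be passed to the IVI run-time
    configuration functions instead of being hand-edited into ivi.ini."""
    with open(path) as f:
        return json.load(f)

# Example configuration file the application would ship and maintain itself.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"dmm1": "GPIB0::22::INSTR", "scope1": "GPIB0::7::INSTR"}, f)
    cfg_path = f.name

instruments = load_instrument_map(cfg_path)
print(instruments["dmm1"])
```

    Because MAX never touches this file, upgrades to MAX 2.x cannot duplicate or drop the entries your applications depend on.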
    --Bankim
    Bankim Tejani
    National Instruments

  • Business area and profit center

    Please can anyone explain the difference between a business area and a profit center?

    Business Area is a type of organizational unit structure, primarily used to facilitate external segment reporting across company codes, covering the company's main areas of operation. Business Areas run across company codes. The relationship is n:n. Business Area is part of FI. 
    Profit Center is also a type of organization unit structure, used for management/ internal controlling purposes. Dividing the company up into profit centers allows one to analyze areas of responsibility and to delegate responsibility to decentralized units, thus treating them as “companies within the company”. The profit centers also run across company codes. The relationship is n:n. Profit Center Accounting is part of Enterprise Controlling.
    While it may appear that Business Areas and Profit Centers support the same kind of analysis and reporting, within SAP's internal architecture they are kept in different modules and different tables.
    SAP Note 321190 describes the difference between Business Area and Profit Center; you can get it from the SAP Service Marketplace.
    SAP has kept Business Area in maintenance mode from 4.6B onwards, while enhancements continue on the Profit Center side.

  • Why does the Reduced Size PDF command create larger files?

    I had been a happy user of Adobe Acrobat version 7 until Adobe ceased support for that version and I was unable to print and do other activities after my last Mac OS 10.6.8 update, so I was forced to upgrade to Acrobat X, version 10.0.0. I had hoped to be able to produce Acrobat 3D files but found out later that Adobe no longer includes that capability; I would have to buy it as a plug-in from Tetra 4D for another $399.00, oh well. I then wanted to reduce the file size of some larger PDFs I had edited. It took a while to locate the command, now in File > Save > Reduced Size PDF, but once I ran it on my file it actually increased the file size!! Why did this happen? Should I continue to use my disabled, unsupported version 7 to get the superior performance I once had? This issue was mentioned in an earlier thread that started out asking where to locate this command, but the resulting larger-file issue remained unanswered. Currently unhappy with my purchase. Will Adobe offer a free upgrade to fix this?

    I agree with Peter. Use Pages '09, but keep a document migration strategy in place. When you create a Pages document in Pages '09, export it to Word .doc too. Export any current content from Pages v5.2.2, and then stop using it. That means no opening documents by double-clicking (which would launch Pages v5).
    Pages v5 is not an evolution of Pages '09 — it is an incomplete rewrite, and uses a different internal document architecture from Pages '09. This internal architecture is to blame for the export translation bloat into Word's different document architecture. It does not matter whether you type a single character or multiple pages; you will get roughly a 500 KB .docx file.
    Opening, and resaving this monster .docx file in MS Word in Office for Mac 2011, LibreOffice, or TextEdit will result in a very small .docx footprint.

  • Table embedded in document causes ID 5.5 and 6 versions to crash

    Hello.
    We are a university that updates and publishes a catalog 4 x a year - this includes a table created in a much older ID version. Now, opening this bulletin and working in 5.5 and 6, every bulletin has crashing issues that revolve around this table.
    Tips online suggested saving the ID6 file as an .idml and reopening it in the newer software. But now the .idml is preventing ID6 from opening. This happened after running the Mavericks OS X updates.
    Yes, I could rebuild this but it's big and I'd rather find the root and fix it.
    Help?
    The crash report mentions, among a lot of data, a long list of plug-ins. I wouldn't know where to find them or what to do with them.
    The short version of one line: InDesign.AppFramework 0x15970876 GetPlugIn + 876534

    Hello Gene,
    I have been checking into this for you and have found out that what you are trying to do cannot be done.  At least not in the current manner you are hoping for.  The internal architecture of DAQmx Base requires the cross-compiling capability of the LabVIEW Mobile Module. While a stand-alone compiler can compile DAQmx Base calls for desktop processors, it cannot compile DAQmx Base calls for ARM.
    If your application requirements exclude the LabVIEW toolchain, then the remaining option is the USB Driver Development Kit, which is available here, but you will have to contact your local field sales representative to discuss support options, as standard phone and e-mail support are not available for the NI Measurement Hardware DDK.
    NI Measurement Hardware Driver Development Kit
    http://digital.ni.com/express.nsf/bycode/exyv4w?opendocument&lang=en&node=seminar_US
    ColeR
    Field Engineer

  • Widget insertion not working DW CS5 Mac

    I've got a template open and am trying to insert the JBuilder Accordion widget. I've done all the necessary steps to download the widget from Adobe and put it into "My Widgets" in the browser. I click on Insert / Widget and get nothing — no dialog asking which widget to insert. It also fails from the Insert panel when clicking the Widget icon.
    Is this a bug or am I doing something wrong?
    Thanks,
    Steve

    Maybe it would help if I re-asked the question in different terms: 
    a) Does DW5 on MacOS monitor the user interface for a "cancel current operation" input?  If so, what is it?
    b) Is there any alternative to "Force Quit" to cancel a "long" operation?
    Note 1: Monitoring the UI is problematic because it slows down the application; how much depends on how often it polls. Depending on the app's internal architecture, it may simply not be possible to stop certain operations cleanly. Both of these would be reasonable motivations to omit a UI cancel function.
    Note 2:  I think MacOS "Force Quit" sends a Unix "kill" signal (SIGKILL) which stops a running process in its tracks.  The process disappears,  and unsaved changes are lost.   I think Unix defines milder signals, signifying something like, "please stop what you are doing", but I have no idea if these are effective in the case of DW.
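    Note 2 has it essentially right: Force Quit delivers SIGKILL, which a process can never catch, while the milder SIGTERM can be handled so an application shuts down cleanly. A small Unix-only Python illustration of the difference (not specific to Dreamweaver):

```python
import os
import signal

shutdown_requested = False

def on_term(signum, frame):
    # A cooperative application can catch SIGTERM, save its state,
    # and exit cleanly. SIGKILL (what Force Quit sends) bypasses
    # handlers entirely, so unsaved changes are simply lost.
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGTERM, on_term)
os.kill(os.getpid(), signal.SIGTERM)  # the "please stop what you are doing" request
print(shutdown_requested)  # prints True: the handler ran
```

    Whether SIGTERM actually interrupts a long operation in DW depends on whether the application checks for the request, which brings us back to Note 1.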
