General MII environment questions

Hi,
We are just starting a project to implement MII.  We have an MII consultant on site who has stated that in addition to the stock MII install, we need an additional database at each plant for locally caching production orders, purchase orders, inventories (raw, wip, f/g, packaging & ingredients) and other support type data (storage locations, material masters, tolerance, item-pack mapping codes, etc.) in support of the continuous operation requirement.
Are additional databases a common requirement? 
If so, is DBA-level access for developers a requirement?  Please keep in mind that they would potentially also want the same access in the production system.
If DBA access is not required, couldn't we just add a tablespace (we're using Oracle)?
What is a typical configuration?
The consultant is not familiar with NWDI.  His recommendation is to move directories from development environments to production.
Are most MII customers using NWDI including the Change Management Service?  If not, how are they controlling and coordinating changes?
It was my impression that some of the data is stored in files and some in the database.  Is that incorrect?
Can changes be safely migrated from development to production by copying directories?
Best regards,
Russ

Hi Russ,
I support Mike's comments. 
At one time all the MII application "articles" were text based and accessible via Windows Explorer.  Now with 12.0+ releases, everything exists in the DB, as you are aware.  I'm not aware of any customer or partner actually using NWDI with MII.  When things were file based, it was a bit easier to integrate with some flavor of source control.  But even then, most projects did not use such tools, for several reasons (albeit not necessarily good reasons).  Many projects are small enough in scope that only one person is working on the code at a time, so the need for source control to prevent code contamination was not there.  It would be helpful for promotions or transports from dev -> tst -> prd, though.  But again, many projects used a possibly archaic method (i.e., Excel) to keep track of which files, and which versions of those files, were where.  At best, when things were file based, source control was used as a nightly backup of files.  I could rattle off a number of multi-site, large enterprises that use Excel as their MII SVN tool.  Again, it may not be best, but it is a common observation.
Today, yes, MII does work with NWDI.  Is it the ultimate in source control/software configuration management?  Probably not.  Last year in Nashville for the SAP AMS conference, I gave a presentation on Software Configuration Management (SVN being a part of SCM) and MII.  I gave some recommendations for incorporating SCM into the MII world.  There are different options depending on your version of MII.  But, for some combinations of MII and components of SCM, I was not able to provide a solution.
I'm hoping to find a really, really good solution for complete SCM compliance in the MII architecture at some point, but I haven't found one yet.
Regarding local DBs: yes, if the requirement is limited sustainability or local survivability, then you will need a DB (at least one) at each plant.  As far as authorizations for your consultant go, it depends on how much you want them to do on the DB side.  If you don't grant permissions, someone who has permissions will need to write the queries and stored procedures, obviously.  That's really your call.
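To illustrate what "local survivability" means in practice, here is a minimal, hypothetical Java sketch (class, method, and key names are all invented for illustration, not part of any stock MII delivery): try the central ERP system first, and fall back to the plant-local cache when the link is down.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "continuous operation" idea: consult the
// central ERP system when it is reachable, refresh the local cache on
// success, and serve from the plant-local cache during an outage.
public class OrderLookup {
    final Map<String, String> localCache = new HashMap<String, String>();
    boolean erpReachable = false; // pretend the WAN link is down

    String fetchFromErp(String orderNo) {
        if (!erpReachable) {
            throw new IllegalStateException("ERP unreachable");
        }
        return "order-from-erp";
    }

    String lookup(String orderNo) {
        try {
            String order = fetchFromErp(orderNo);
            localCache.put(orderNo, order); // refresh the cache on success
            return order;
        } catch (IllegalStateException down) {
            return localCache.get(orderNo); // survive the outage locally
        }
    }

    public static void main(String[] args) {
        OrderLookup lookup = new OrderLookup();
        lookup.localCache.put("4711", "cached-order"); // previously synced data
        System.out.println(lookup.lookup("4711")); // prints "cached-order"
    }
}
```

In a real deployment the map would of course be the plant-local Oracle schema, kept in sync with ERP while the link is up; the sketch only shows the fallback shape of the logic.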
Good luck with your MII endeavors.

Similar Messages

  • General Airport Extreme questions

    I have some general questions regarding the dual band technology and how it relates to my specific setup. A bit of background:
    I have 20Mbps cable internet and I am running the new Airport Extreme on a pc running XP to broadcast my wireless signal. I have an iPhone 3GS, iPhone 4 and iPad. I also have an Airport Express that I use in bridge mode so that I can connect my PS3 in a wired fashion.
    My questions:
    -How do wireless networks work when you have both "g" and "n" devices? (which I do)
    Does the router send out simultaneously a 2.4 and 5.0 signal for each capable device to pick up? Do the g devices pick up the 2.4 signal while the n devices pick up the 5.0? Or do all devices pick up the 2.4 signal since there are non "n" rated devices on the network? I have read many conflicting ideas on this.
    - I have spoken with multiple Apple customer service people and have gotten differing advice on how to improve my devices' connectivity to the network as I have been experiencing unacceptable levels of interference on all devices.
    One rep recommended that in the AirPort Utility, under wireless options, I check the 5 GHz box to set up a separate 5 GHz network for the n devices. Another rep dismissed this idea.
    Does creating this new 5.0 network keep the network signal from being bumped down to "g" speeds when "g" rated devices are connected?
    How am I better off setting my networks up so that my n devices as well as my g devices get the best performance and connectivity?
    -I use an Airport Express in bridge mode with my PS3 and it seems to be helping my connection speeds. When I was connected wirelessly, the speeds would either be non-existent or unacceptable. I know the PS3 has a "g" antenna. I have the Airport Express connected to the 5.0 GHz wireless network in AirPort Utility. Am I actually now connecting to the 5.0 GHz signal and converting to the PS3 at higher bandwidth than I would be otherwise if I was connected to the 2.4 GHz wireless signal?
    I apologize for the long-winded post.
    Any insight will be most appreciated, as I am banging my head against the wall here.
    Scott

    The default setup for the AirPort Extreme provides a dual band wireless network with a Radio Mode that looks like this:
    802.11a/n --- 802.11n/g/b
    The 5 GHz band is on the left and the 2.4 GHz band is on the right of the --- dashes.
    Also as a default, the same wireless network name is used for both bands. The theory here is that any device will automatically connect to the best signal quality. This works great for older "g" devices because they can only connect to the 2.4 GHz band, so you know where they will be at all times. You never need to be concerned about a "g" device slowing down a faster "n" device on the 5 GHz band because the "g" device cannot even connect to the 5 GHz band.
    But note that newer "n" devices can connect to either the 5 GHz band or the 2.4 GHz band. So, it's possible that you may have a "n" device and a "g" device on the 2.4 GHz band and the "n" connection will slow a bit if the "g" device is really active.
    It gets a bit more complicated if you have a new iPhone, which can connect at "n" speeds, but only to the 2.4 GHz band. The new iPad can connect to either 2.4 GHz or 5 GHz.
    Most users are better off if they leave the settings as recommended on the AirPort Extreme. That's because each device will connect to the band with the best available signal quality. Isn't that what you really want?
    If you will concentrate on signal quality, which is a combination of signal strength and low noise, you'll be fine.
    Most problems arise when users think that they want to connect to the 5 GHz band because the speeds can be faster there. So, they assign a separate name to the 5 GHz network and then "point" their computer at that band. The potential issue with doing this is that 5 GHz signals are much weaker than 2.4 GHz signals.
    So, if the computer is several rooms away and you have "forced" it to connect to 5 GHz, you are likely telling it to connect to a signal that is both weaker and slower than the 2.4 GHz signal at that location. The signal always slows down as it moves further from the router or encounters obstructions.
    As I said, most users will do well to use the default settings and let each device find the best connection automatically. With a mix of a number of devices, you'll never be able to find the single "perfect" setting. With wireless, there are always compromises. No way to avoid that, I'm afraid.

  • German Umlauts OK in Test Environment, Question Marks (??) in production

    Hi Sun Forums,
    I have a simple Java application that uses a JFrame for a window and a JTextArea for console output. While running my application in test mode (that is, run locally within the Eclipse development environment), the software properly handles all German umlauts in the JTextArea (I also use Log4J to write the same output to file; that too is OK). In fact, the application is flawless from this perspective.
    However, when I deploy the application to multiple environments, the umlauts are displayed as ??. Deployment is destined for Mac OS X (10.4/10.5) and Windows-based computers (XP, Vista), with a requirement of Java 1.5 at the minimum.
    On the test computer (Mac OS X 10.5), the test environment is OK, but running the application as a runnable jar, German umlauts become question marks (??). I use Jar Bundler on Mac to produce an application object, and Launch4J to build a Windows executable.
    I am setting the default encoding to UTF-8 at the start of my app. Other international characters (e.g., e and a with accents) are treated OK after deployment. The failure seems to be localized to German umlaut characters.
    I have encoded my source files as UTF-8 in Eclipse. I am having a hard time understanding what the root cause is. I suspect it is the default encoding on the computer the software is running on. If this is true, then how do I force the application to honor German umlauts?
    Thanks very much,
    Ryan Allaby
    RA-CC.COM
    J2EE/Java Developer
    Edited by: RyanAllaby on Jul 10, 2009 2:50 PM

    So you start with a string called "input"; where did that come from? As far as we know, it could already have been corrupted. ByteBuffer inputBuffer = ByteBuffer.wrap( input.getBytes() ); Here you convert the string to a byte array using the default encoding. You say you've set the default to UTF-8, but how do you know it worked on the customer's machine? When we advise you not to rely on the default encoding, we don't mean you should override that system property; we mean you should always specify the encoding in your code. There's a getBytes() method that lets you do that.
    CharBuffer data = utf8charset.decode( inputBuffer ); Now you decode the byte[] that you think is UTF-8, as UTF-8. If getBytes() did in fact encode the string as UTF-8, this is a wash; you just wasted a lot of time and ended up with the exact same string you started with. On the other hand, if getBytes() used something other than UTF-8, you've just created a load of garbage. ByteBuffer outputBuffer = iso88591charset.encode( data ); Next you create yet another byte array, this time using the ISO-8859-1 encoding. If the string was valid to begin with, and the previous steps didn't corrupt it, there could be characters in it that can't be encoded in ISO-8859-1. Those characters will be lost.
    byte[] outputData = outputBuffer.array();
    return new String( outputData ); Finally, you decode the byte[] once more, this time using the default encoding. As with getBytes(), there's a String constructor that lets you specify the encoding, but it doesn't really matter. For the previous steps to have worked, the default had to be UTF-8. That means you have a byte[] that's encoded as ISO-8859-1 and you're decoding it as UTF-8. What's wrong with this picture?
    This whole sequence makes no sense anyway; at best, it's a huge waste of clock cycles. It looks like you're trying to change the encoding of the string, which is impossible. No matter what platform it runs on, Java always uses the same encoding for strings. That encoding is UTF-16, but you don't really need to know that. You should only have to deal with character encodings when your app communicates with something outside itself, like a network or a file system.
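    For reference, the safe pattern is simply to name the charset at every conversion. A minimal sketch (the strings here are just examples), which works on Java 1.5 and later:

```java
// Minimal sketch: always name the charset explicitly instead of relying on
// the platform default. "UTF-8" is guaranteed to be supported by every Java
// implementation, so this behaves the same on every machine.
import java.io.UnsupportedEncodingException;

public class ExplicitEncoding {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String text = "Gr\u00fc\u00dfe";           // "Grüße": u-umlaut and sharp s
        byte[] utf8 = text.getBytes("UTF-8");      // encode with a named charset
        String back = new String(utf8, "UTF-8");   // decode with the same charset
        System.out.println(text.equals(back));     // prints "true"
    }
}
```

    The same rule applies at every boundary (files, sockets, streams): pass the charset name to the reader, writer, or converter rather than letting the platform default decide.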
    What's the real problem you're trying to solve?

  • AutoVue for Agile PQM - Java Runtime Environment Question

    Greetings,
    An end user is attempting to view/open attachments within Agile Product Quality Management (PQM); we use Agile PQM 9.3.1.
    When clicking on an attachment file, a pop-up screen opens with a javascript alert, prompting the user with, "Please Install Java Runtime Environment." After clicking "OK", user is redirected to java download website. User downloads and installs the update.
    After rebooting the machine, we attempt to access the attachment file again within PQM and receive the same error message, "Please Install Java..."
    After installing the first java update, the user is now on the following java version:
    Java Plug-in 10.55.2.14
    Using JRE version 1.7.0_55-b14 Java HotSpot(TM) Client VM
    Do we need to install the second update to be able to view attachments within Agile PQM via AutoVue?
    I can't test the Java updates myself because there are some legacy Oracle apps that I won't be able to support if I update my Java.
    Thanks in advance,
    William

    You might want to post the question to the Agile forum.
    The popup you see seems to be tied to Agile code itself and not to AutoVue.
    AutoVue will work on Java 7u55.
    But you are running Agile, so you will need to make sure the rest of the apps are confirmed to work with 7u55.
    You also need to review the Java update guidelines; by default you will always be prompted to install the latest Java update.

  • Configuring our RAC environment Questions

    The environment consists of Sun Solaris 10, Veritas, and 10g RAC:
    Questions:
    I need to know the settings and configuration of the entire software stack that will be the foundation of the Oracle RAC environment: network configurations, settings, and requirements for any networks, including the RAC network between servers.
    How do we set up the Solaris 10 structures: what goes into the global zones, the containers, the resource groups, RBAC roles, SMF configuration, and schedulers?
    Can we use zfs, and if so, what configuration, and what settings?
    In addition, these questions I need answers to:
    What I am looking for is:
    -- special hardware configuration issues, in particular the server RAC interconnect: do we need a hub, a switch, or crossover cables, and how should they be configured?
    -- Operating System versions and configuration. If it is Solaris 10, then there are more specific requirements: how to handle smf, containers, kernel settings, IPMP, NTP, RBAC, SSH, etc.
    -- Disk layout on SAN, including a design for growth several years out: what are the file systems with the most contention, most use, command tag depth issues etc. (can send my questionnaire)
    -- Configuration settings\ best practices for Foundation suite for RAC and Volume manager
    -- How to test and Tune the Foundation suite settings for thru-put optimization. I can provide stats from the server and the san, but how do we coordinate that with the database.
    -- How to test RAC failover -- what items will be monitored for failover that need to be considered from the server perspective.
    -- How to test data guard failures and failover -- does system administration have to be prepared to help out at all?
    -- How to configure Netbackup --- backups

    Answering all these questions accurately and correctly for your implementation might be a bit much for a forum posting.
    First I'd recommend accessing the Oracle documentation on otn.oracle.com. This should get you the basics about what is supported for the environment you're looking to set up, and go a long way toward answering your detailed questions.
    Then I'd break this down into smaller sets of specific questions and try to get the RAC experts on the RAC forum to help out.
    See: Community Discussion Forums » Grid Computing » Real Application Clusters
    Finally, Oracle Support via Metalink should be able to fill in any gaps in the documentation.
    Good luck on your project,
    Tony

  • Quick Environment Question

    It's time I finally delved into the Environment, but I'm a little nervous about messing things up. So here's my question:
    If I start Logic and save my Autoload song under either another name or as a project and then start twiddling about in the Environment, will the changes I make in the Environment be reflected the next time I open Logic, or do they only apply to this new saved "song" that I've just created?
    Sorry if that's a little naive, but I'm just wanting to be safe.
    Thanks!

    you'll be alright doing it that way. the environment is particular to a song - not changing any global preferences or anything.
    go twiddle.
    check out vector faders. very cool. never used them though. main thing i use in environment is transformers. not cool - just useful.

  • Some general portal caching questions

    Hi experts,
    I have some general questions regarding caching functions in portal.
    1. In System administration->Navigation I can activate navigation cache. By default there are 3 connectors: Collaboration Connector, ROLES and gpn.
    I guess the Collaboration Connector caches collaboration content and ROLES caches the content of the role-based navigation? Is that correct? What is the gpn connector?
    2. Does this cache only cache the navigation structures, not the iViews and the content?
    3. For some iViews and pages I can activate caching in PCD with certain cache levels. That caching is not related to navigation caching?
    4. I can't activate caching for Web Dynpro Java iViews and Web Dynpro Java proxy pages. Is that correct? If not, how can I achieve that? Those settings are deactivated for me, so I can't activate them.
    5. In Visual Admin I can activate navigation cache under com.sap.portal.prt.sapj2ee. Is this option related to the setting I can set under system administration->navigation in the portal? Because I activated the option in the portal, but in VA it still showed as not activated.
    I crawled some documentation but couldn't find exact information.
    Thanks and regards
    Manuel

    Hi,
    1. GPN is Guided Procedures Navigation connector
    2. Yes only Navigation nodes are cached (TopLevel and Detailed Navigation nodes)
    3. Here it is PCD Caching, which has nothing to do with Navigation caching
    4. I never tried this, but it looks like what you say is true.
    5. What you see in VA is old caching mechanism. So this is obsolete and can be ignored.
        So you should only use the options from system administration->navigation
    Changes in the Navigation Cache
    Regards,
    Praveen Gudapati

  • Some General iPhone 4 questions

    We just bought my wife a used iPhone 4 and (a little late now) we have some questions.
    Generally the questions revolve around the issues the iPhone 4 had when it was first released - something about the antenna location. Was that resolved, and if so, how?
    Is there a way to determine when an iPhone was manufactured or how old it is?
    I see in the forum 'refined' lineup there are iPhone 4 forums for both GSM and CDMA - where is that shown on the iPhone?
    The primary issue we are having is with signal strength. Since it's 'hers' I only hear about it so don't get much chance to see for myself.
    Thank you for any help

    roaminggnome wrote:
    The antennae issue was blown out of all proportion.  Many phones have similar issues.
    Most do not have the issue.
    I have never experienced this problem, nor has anyone that I personally know.
    Funny what the internet can do. (maybe not so funny)
    Thanks for the reply and 'vote of confidence'

  • General Macbook Pro questions...

    I am VERY new to the Mac world and had a few questions. Any answers are greatly appreciated!
    *The bottom of my MacBook Pro gets hot. I was told not to set it on my lap or another soft surface because that doesn't allow air in or out. My question is, I have bought a carbon fiber protective case for it. Is that OK to use, or will that also prevent the flow of air?
    *Is it OK to use my MacBook Pro while it is plugged in? Meaning, will it damage the battery in any way if I use it while it is charging?
    *Does anyone have any tips on preventive maintenance? I would just like to know if there is anything I can do to keep my investment in top performance.
    Thanks for any answers!

    The laptop does get hot, more so with greater activity and when using the higher-performance graphics card. Generally speaking, a protective case will impede some heat transfer and MAY cause overheating, though presumably the manufacturer of the protective case has tested it to see if there's a problem. It's not so much a matter of air flow (there are no vents/fans there) but radiant dissipation. The aluminum unibody is very robust, so you may not want to use the case anyway.
    It's perfectly fine to use the computer when it's plugged in. It won't damage the battery in any way or reduce the battery life. However, it's good to, every once in a while, exercise the battery by using it up completely and then recharging (maybe every month or two).
    There's really no preventative maintenance required. The system should perform any system maintenance periodically and independently. The only thing you really want to look out for is that it's best if you never let your hard-disk get > 90% full.
    As for Time Machine, it's not "like a hard-drive". Time Machine is a versioning backup system. That is to say, it maintains a copy of your system and past versions of files on that system. If you delete a file (or e-mail message) by accident, you can recover it through Time Machine. When the user activates Time Machine, they see the current window filled with messages/icons and a bar indicating various points in time. You slide your mouse over the bar to select a particular point in time, and it restores your files to the way they were at that time. It's not a conventional backup solution (like mirroring the disk), but it does solve a problem. You might prefer more conventional backup solutions, such as mirroring the disk to create a bootable backup disk.
    You can use your old WD external 2TB drive from your Windows PC with your Mac. If you intend to use it exclusively with your Mac, I would suggest that you reformat it using the Mac filesystem (HFS+) and a GPT-type partition table (what Macs use, but also high-end PCs and servers).

  • General and specific questions on the applicability of Sun Studio 11

    Hi. In an e-mail letter from Sun Microsystems I read about Sun Studio 11 to "utilize its record-setting parallelizing compilers." From this message I was attracted by the possibility of adding something like parallel processing, not by changing the processor (hardware), but by adding Sun-Studio-11 software to a Linux operating system. Now I already have a Fortran compiler, the Intel Fortran Compiler for Linux, which is free and can handle Cray-style pointers, a feature hard to find in a free Fortran compiler.
    1a. So for the most basic of questions, without having parallel-processing hardware, just an ordinary processor [a 1-GigaHertz (GHz) Advanced MicroDevices Duron central processing unit, in my case], is it possible to have parallel processing and thereby increase one's computing speed by installing Sun Studio 11 in a Linux operating system?
    1b. If so, by what factor could one expect the speed of computation to increase over not having Sun Studio 11 installed? (If the gain in speed is dependent on the type of computations being performed, I imagine possibly using a Fortran code to perform numerical calculations using and perhaps searching for minima or maxima in a two-or-more-dimensional surface. So please give me an idea of the sort of gain in speed one could expect for these two types of activites, calculations using formulas and searches for minima and maxima among already-computed quantities.)
    1c. Again if so, how could one just by adding software have parallel processing without two or more hardware processors? In other words, what is the basic working principle of the software to make the simultaneous performance of multiple tasks (multitasking or parallel processing) possible?
    2a. Does Sun Studio 11 include a Fortan compiler?
    2b. If so, must one use it to have parallel processing with Sun Studio 11?
    2c. Or will the Intel Fortran Compiler for Linux work with Sun Studio 11 to have a parallel processing capability?
    Concerning hardware requirements, I read that Sun Studio 11 requires a minimum of 512 MegaBytes (MB) of memory, presumably Random Access Memory (RAM). My Hewlett-Packard, ZE1110, Pavilion, notebook computer has 256 MB of RAM, but is expandable to a maximum of 512 MB of RAM. So in this respect it is in principle at least technically possible for me to meet the minimum system requirement for Sun Studio 11 with my computer, if I choose to increase its RAM. Somehow accommodating the cost of such a RAM addition, including whether one may have to buy two matching 256-MB RAM modules or presumably just one additional 256-MB RAM module, is another consideration. But before spending money for such an upgrade, one should first thoroughly investigate other matters to determine if other things are going to work, and to determine what gain, if any, one could expect in computing speed with Sun Studio 11 and an additional 256 MB of RAM; then decide, based on such data, whether the purchase is personally worth the money or not. That's one motivation behind this posting; another motivation is for me to learn some things.
    Lastly I would like to thank whoever was thoughtful enough to provide the Sun Download Manager (SDM) 2.0, which allows the pausing and resumption of the 207-MB download studio11-lin-x86.tar.bz2 for the Linux version of Sun Studio 11! Using a slow dialup Internet connection like mine, with a maximum speed of 28.8 kilobits/second, this makes it possible to download that file over a number of Internet sessions instead of having to have an uninterrupted, 19-or-more-hour Internet session. Besides the inconvenience of tying up one's telephone line for that long a time, it might even be difficult to have such an uninterrupted Internet session for that long a time. I have at least started such a download using SDM 2.0, potentially over multiple Internet sessions. Whether or not I carry it out to completion could depend on whether everything looks good with Sun Studio 11 for my particular situation. Thanks in advance for your help.

    Thanks for both of your postings here. I'm mostly trying to learn something here.
    From Maxim Kartashev: "For example, if one thread (or process, or lwp) frequently performs an I/O operation, then the other thread (process, lwp) can utilize processor resources to perform, say, some computations while first one waits for operation to complete."
    I think I might understand what you meant above. I guess lwp in the above context stands for light-weight process. And I think you may be talking about a potential gain in speed with just one, ordinary processor. I guess you meant that one program, or perhaps group of programs, could perform input/output processes at the same time it is performing calculations because different parts of the processor are being used in these two groups of processes. Then on "while first one waits for operation to complete" I guess you meant that if the input/output operations finish before the computations finish, then thread 1 that was performing the input/output operations will have to wait until the current computations ordered by thread 2 are complete before thread 1 can utilize the computational resources for its own computations; i.e., two threads can't use the same computational resources of an ordinary processor at the same time. How is my thinking so far, Maxim, right, partly right, or all wrong?
    Now if the above thinking of mine is right, then it appears that one could have some gain in speed doing things like you suggest with just one, ordinary processor. And if so, I imagine that the gain could be a maximum of a factor of two for a program that requires spending as much time in input and output as it does in computation; i.e., keeping both the computational and input/output resources working all of the time without the input/output resources waiting on the computational resources or vice versa. How is my thinking here?
    If the above thinking is correct, just for purposes of discussion, with just one ordinary processor (not a dual processor) and a program which does nothing but computations, there would be no gain in speed using Sun Studio 11 and a Fortran compiler over not using Sun Studio 11. In other words, to increase the speed of computation one would have to buy a faster computer, buy parallel-processing hardware for an existing computer and use parallel-processing software, or somehow figure out how to harness two or more computers to work for you at the same time, with instructions from one piece (or perhaps set of pieces) of code set up for parallel processing using two or more different computers. The latter case would be a computer analogue of "two 'heads' are better than one" - not human heads, but computers. How is my thinking here?
    Here I am still assuming that it is possible for one processor to be used to do two different kinds of things at once. However, I don't see how one Fortran program could instruct two things to be done at once. This is because I have not seriously studied parallel processing, I suppose. That is, I am used to a sequential set of instructions that proceed from top to bottom down the lines of code; i.e., one instruction or line of code can't be executed until the line of code before it has been completely executed. That is the computing "world" with which I am familiar. So how about someone here teaching me, with an example of parallel-processing Fortran code, how parallel processing works, explaining what instruction or group of instructions tells the computer to execute input and computational instructions at the same time?
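    To make the question concrete, here is the kind of overlap I have in mind, sketched as a hypothetical Java illustration rather than Fortran (the names and numbers are my own invention): one thread sleeps, standing in for a blocking I/O operation, while another keeps the processor busy with a computation.

```java
// Sketch: overlap a (simulated) I/O wait with computation using two threads.
// While the "io" thread sleeps, as it would while blocked on a read, the
// "compute" thread keeps the CPU busy. On a single ordinary processor the
// threads interleave rather than truly run simultaneously, which is exactly
// how the I/O wait gets hidden.
public class OverlapDemo {
    static volatile double sum; // result produced by the compute thread

    public static void main(String[] args) throws InterruptedException {
        Thread io = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(100); // stand-in for a blocking I/O operation
                } catch (InterruptedException e) {
                    // ignored in this sketch
                }
            }
        });
        Thread compute = new Thread(new Runnable() {
            public void run() {
                double s = 0.0;
                for (int i = 1; i <= 1000000; i++) {
                    s += 1.0 / i; // partial sum of the harmonic series
                }
                sum = s;
            }
        });
        io.start();      // both threads now run concurrently
        compute.start();
        io.join();       // wait for both to finish
        compute.join();
        System.out.println(sum); // about 14.39
    }
}
```

    In Fortran the analogous effect would typically come from OpenMP directives or a compiler's auto-parallelization options rather than hand-written threads, but the principle is the same.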
    Based on the encouraging information from one or more other people I have been able to use the Intel Fortran for Linux 8.1.024, if I remember correctly, in a computer with a 1-GigaHertz (GHz), Advanced MicroDevices (A.M.D.), Duron Processor. So this is at least one case where it is not essential to have an Intel processor to use the Intel Fortran Compiler for Linux 8.1.024.
    Is the Sun Fortran compiler free for personal use? And can it handle Cray-style pointers?

  • General Advice and Question on Saving Book Files

    I am still relatively new to FrameMaker. I attended a one-on-one course 12 months ago for two days. When I attended this course I was totally oblivious to the features of FM; it was totally foreign software. I had looked through the User Guide and as part of the course received Classroom in a Book. I have not spent a lot of time on FM in the last 12 months as I still have to produce manuals in the old format. The changeover is a huge project.
    I have searched this forum many times and found the answers I have needed to many of my questions and learnt so much that I had not fully understood.
    I would just like to confirm now with the users of this forum that I am going about my project in converting to FrameMaker in the right way and I have a question on saving books etc.
    I have Operator and Service manuals for four machines (with variations between each machine and then variations between each of those basic models). Parts of the Operator and Service manuals are interchangeable.
    I want to be able to single source my material and I am setting up an Excel spreadsheet which will break down each section of both manuals. Once completed, when a machine is ordered the Excel spreadsheet will produce the details of the files needed to complete a book (manual). There will still be work needed on the book files before the manual can be produced.
    My files are double sided with each Chapter starting on the right hand page and I am using variables and conditional text. The manuals contain columns, graphics, tables etc.
    I have done test runs and have managed to accomplish the layout in PDF that I want for the final manuals which will be printed. I am only working on four machines at this stage, once this is set up others will be added or I may have to set up another lot of single source files.
    Am I on the right track?
    My specific question is that currently I use master files for each machine which are updated as needed. These files are then saved to another folder and identified by a Serial Number and Customer. The serialised files are then saved to PDF. I would like to continue this process as in the future the manuals for that specific machine may need to be revised (i.e. Rev. 0 to Rev. 1 and so on).
    So basically I have all my single source files from which I produce a book file and save it to an appropriate folder which identifies its Serial Number and Customer. I don’t know if I have missed something very simple but I want these serialised books to remain as is, I do not want them updated when I make a change to my single source documents. I want to be able to produce my manuals for any number of machines from my single source without changing the book files that have already produced a manual. Is this possible?
    I am using FrameMaker 7.2 b144.
    I hope this posting provides a clear view of what I am trying to achieve.
    Many thanks in advance for any advice and answers that are provided.
    PS This is the first time I have ever posted to any forums.

    It sounds like you want to be able to open the book files and print without updating...
    This isn't the case. Each book will need to be updated before output.
    The reason is that your page count (and thus any TOC, index, and xrefs) will be different in each book and thus will need to be updated.
    The books will still need an opening and updating prior to output, but you could easily (cheaply) script that process so that you could batch-output the books.
    -Matt
    Matt Sullivan
    director of training
    roundpeg, inc.
    http://blogs.roundpeg.com
    http://twitter.com/mattrsullivan

  • EJB environment question (static helper classes)

    We're using JBoss as AS containing several stateless session beans.
    Now, we have certain helper classes that are abstract and contain static methods. Is this a problem for the EJBs? All of them use these helper classes throughout their methods. Are they sharing the static class, and will that slow things down somehow? Or does each EJB use its own version of the class so they can run concurrently?
    Should we rethink this and put an INSTANCE of each helper class in each ejb instead of using static methods in the helper class?
    Now in EJB method:
    Helper.calculateStuff();
    Should it be?
    Helper h = new Helper(); // created when the EJB is created
    h.calculateStuff();

    > The helper methods do database queries etc. and return results that the EJB sends onwards to clients. If these methods are not synchronized (and the EJBs share the static class), won't it cause concurrency errors? I think most of our methods are not synchronized (and it doesn't seem to cause any concurrency errors so far... though the system has not been stress tested that much, and concurrency bugs tend to pop up later and randomly :P).
    >
    No, if you don't have any static data variables in the Java classes, static methods as such will not cause concurrency errors, and the methods should not be synchronized.
    If you have any synchronized methods and they take a while to execute, that could become a bottleneck in itself, because different threads end up waiting for each other, so make sure you don't have any synchronized methods where synchronization is not explicitly needed.
    Think of a static method (without static data in the class being manipulated) as a plain function in another programming language.
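    To make that concrete, here is a minimal sketch (the class and method names are made up for illustration, echoing the Helper.calculateStuff() above): a static method that touches only its parameters and local variables can be called from many threads at once without any synchronization.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StaticHelperDemo {
    // Safe: touches only its parameter and locals, no static fields.
    static int calculateStuff(int x) {
        return x * 2;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<Integer>> futures = new ArrayList<>();
        // 1000 concurrent calls; each call is independent, so no locking is needed.
        for (int i = 0; i < 1000; i++) {
            final int n = i;
            futures.add(pool.submit(() -> calculateStuff(n)));
        }
        long sum = 0;
        for (Future<Integer> f : futures) {
            sum += f.get();
        }
        pool.shutdown();
        System.out.println(sum); // sum of 2*i for i in [0, 1000) = 999000
    }
}
```

    The moment you add a static field that the method reads and writes, this guarantee disappears and you are back to shared mutable state.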
    > We have some scalability problems with the EJBs... It seems as if they do not run concurrently. If we do a stress test with several threads calling the EJBs, their response time increases by too large a factor to feel comfortable...
    Apparently you do have some scaling/concurrency problem, which could have many causes: transaction locking and clashes in the database, a poorly configured database, network congestion, problems in the EJB architecture, etc.
    The general idea when debugging is first to find out exactly which calls in your code take the longest time to execute (profiling, logging, and System.out.println's are useful) when you put parallel load on your system, rather than just observing that "the whole application seems slow". From there you can move on and "divide & conquer" the problem.
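    As a crude starting point for that kind of measurement, a timing wrapper like the sketch below (the names are illustrative, not from any framework) is often enough to show which call dominates under load before you reach for a full profiler.

```java
import java.util.function.Supplier;

public class Timed {
    // Runs the given call, prints the elapsed milliseconds, and returns the result.
    public static <T> T time(String label, Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + ms + " ms");
        }
    }

    public static void main(String[] args) {
        // Wrap any suspect call to see where the time goes.
        int result = time("calculateStuff", () -> 21 * 2);
        System.out.println(result); // prints 42
    }
}
```

    Wrapping each suspect call (database query, remote call, heavy computation) separately quickly narrows the search down to the real bottleneck.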

  • General JPA query question

    Hello world,
    I'm new to JPA 2.0 and there are a few things I don't understand.
    BTW: I can't figure out the keywords to search for this question, so please pardon me if it's one of the most asked.
    Using the Preview, I've seen that alignment went straight to Hell, so I tried to make this as readable as I could using pipes in place of white spaces in the result sets.
    I have a couple of tables:
    CUST table (for customers):
    CUST_ID (pk, integer)
    CUST_NAME (varchar)
    ORD table (for orders):
    ORD_ID (pk, integer)
    ORD_STATUS (char) can be: N for new, S for shipped, D for delivered
    CUST_ID (fk, integer)
    The relationship is, of course, a "one to many" (every customer can place many orders).
    Content of the tables:
    CUST_ID|CUST_NAME
    1|elcaro
    2|tfosorcim
    3|elppa
    ORD_ID|ORD_STATUS|CUST_ID
    2|N|1
    3|N|1
    4|N|1
    5|S|1
    6|S|1
    7|D|1
    8|D|1
    9|D|1
    10|D|2
    11|N|2
    12|S|3
    13|S|3
    Here's how I annotated my classes:
    Customer.java:
    @Entity(name = "Customer")
    @Table(name = "CUST")
    public class Customer implements Serializable {
        private static final long serialVersionUID = 1L;
        @Id
        @Column(name = "CUST_ID")
        private Integer id;
        @Column(name = "CUST_NAME")
        private String name;
        @OneToMany(mappedBy = "customer")
        private List<Order> orders;
        // Default constructor, getters and setters (no annotations on these)
    }
    Order.java:
    @Entity(name = "Order")
    @Table(name = "ORD")
    public class Order implements Serializable {
        private static final long serialVersionUID = 1L;
        @Id
        @Column(name = "ORD_ID")
        private Integer id;
        @Column(name = "ORD_STATUS")
        private Character status;
        @ManyToOne
        @JoinColumn(name = "CUST_ID", referencedColumnName = "CUST_ID")
        private Customer customer;
        // Default constructor, getters and setters (no annotations on these)
    }
    Everything works just fine, the following JPQL query yields the results I expected:
    select c from Customer c
    it returns three objects of type Customer, each of which contains the orders that belong to that customer.
    But now, I want to extract the list of customers that have orders in status 'N', along with the associated orders (only the status 'N' orders, of course).
    Back in the good ol' days I would have written an SQL query like this:
    select c.cust_id, c.cust_name, o.ord_id, o.ord_status
    from cust c
    inner join ord o on (o.cust_id = c.cust_id)
    where o.ord_status = 'N'
    and it would have returned the following result set:
    CUST_ID|CUST_NAME|ORD_ID|ORD_STATUS
    1|elcaro|2|N
    1|elcaro|3|N
    1|elcaro|4|N
    2|tfosorcim|11|N
    The following JPQL query, however, doesn't yield the expected results:
    select distinct c from Customer c join c.orders o where o.status = 'N'
    it returns the correct set of customers (customer 'elppa' doesn't have any status 'N' order and is correctly excluded), but each customer contains the full set of orders, regardless of the status.
    It seems that the 'where' clause is only evaluated to determine which set of customers has to be extracted and then the persistence provider starts to navigate the relationship to extract the full set of orders.
    Thinking a little about it, I must admit that it makes sense.
    I then tried out another JPQL query:
    select c, o from Customer c join c.orders o where o.status = 'N'
    this JPA query yields results that are similar to the ones produced by the previous SQL query: each result (4 results as expected) is a 2-object array, the first object is of type Customer and the second object is of type Order. But, again, the objects of type Customer contain the full set of related orders (as I expected, this time). Not to mention the fact that now the orders are not contained in the Customer objects, but are returned separately, just as in an SQL result set.
    Now the question is:
    Is it possible to write a JPA query that filters out, not only the customers that don't have an order in status 'N', but the related orders (fetched during relationship navigation) that are not in status 'N' as well?
    What I'd like to be able to get is a 2-customer result where each customer contains only its status 'N' orders.
    I read the Java EE 6 Tutorial and one of the examples (the Order Application) has a schema that is similar to mine, but I couldn't find a query like this (in the downloaded source code).
    Although I think the above is standard behavior, I use an Oracle Weblogic 12c server (through its Eclipse adapter) and the persistence provider appears to be EclipseLink.
    Thanks in advance.
    Best regards,
    Stefano

    Hello,
    When returning an entity from JPQL, it gives you the entity as it is in the database. Your "select distinct c from Customer c join c.orders o where o.status = 'N'" is asking for all customers that have an order with a status of 'N', so that is what it gives you. There is no condition to filter anything on the relationship when building the Customer object in JPA - doing so would mean returning a managed entity that does not reflect what is in the database. This would affect other queries, since JPA requires that queries return the same instance of an entity regardless of the query that is used to bring it back. So a query using your "where o.status = 'N'" would cause conflicting results when used with a query using "where o.status = 'Y'". And these queries would make the EntityManager unable to determine what has changed on the returned objects.
    EclipseLink does have the ability to filter over relationships, but it is not available through standard JPA and I would strongly discourage it. Instead of querying for Customers, why not change the query to get Orders instead:
    "select o from Customer c join c.orders o where o.status = 'N'". Assuming Orders have a ManyToOne back reference to their Customer, this means you do not need to traverse the Customer->Order relationship. If using
    "select c, o from Customer c join c.orders o where o.status = 'N'"
    I am not sure why you would use the orders from the returned customers instead of the orders returned in the results though.
    You could also return "select c.id, c.name, o.id, o.status from Customer c join c.orders o where o.status = 'N'" which is the equivalent of what you would get from the SQL you initially posted.
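    For completeness, here is a small sketch of the Order-query approach: once the filtered orders come back, the "each customer with only its status-'N' orders" shape can be rebuilt in memory without touching the managed Customer.orders collections. Plain classes stand in for the entities here (names and data invented for illustration).

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupOrdersDemo {
    // Simplified stand-ins for the Customer and Order entities (illustration only).
    record Customer(int id, String name) {}
    record Order(int id, char status, Customer customer) {}

    public static void main(String[] args) {
        Customer elcaro = new Customer(1, "elcaro");
        Customer tfosorcim = new Customer(2, "tfosorcim");
        // Pretend this list came back from: select o from Order o where o.status = 'N'
        List<Order> newOrders = List.of(
                new Order(2, 'N', elcaro),
                new Order(3, 'N', elcaro),
                new Order(4, 'N', elcaro),
                new Order(11, 'N', tfosorcim));
        // Group the already-filtered orders by their customer in memory,
        // leaving the managed relationship untouched.
        Map<Customer, List<Order>> byCustomer = newOrders.stream()
                .collect(Collectors.groupingBy(Order::customer));
        System.out.println(byCustomer.size()); // prints 2
    }
}
```

    The grouped map then holds exactly the two customers with only their status-'N' orders, which is the result the original poster was after.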
    Regards,
    Chris

  • Order type RK in New general ledger environment

    Hi all,
    When creating a credit note for a pricing error using order type RK, cost of sales is being posted, which it shouldn't be. May I know what causes this, and what is the fix?
    Any advice is welcome.
    Francis

    Hi Francis,
    the correct place to post this question is the SD Sales forum (SAP ERP SD Sales).
    BPX Forum is purely meant to discuss different business processes and issues surrounding that.
    Best Regards
    Sadhu Kishore

  • General Data source question

    Hi,
    I am a beginner, and I want to know some information about standard DataSources.
    For example: for DataSource 0FI_AR_3, how many times can this DataSource be used? Or can it be used only once, for one target?
    The question may be silly, but I am trying to learn.
    Thanks.

    http://help.sap.com/saphelp_nw04/helpdata/en/70/10e73a86e99c77e10000000a114084/frameset.htm
    It loads -> 0FIAR_O03 (ODS FIAR: Line Item), and that DSO loads -> 0FIAR_C05 (Cube FIAR: Line Item).
    A DataSource can be used to update any number of data targets.
