Is IDM the right solution as an embedded component in ISV software?

Hello Forum,
I would like to consult with you.
Our product consists of a frontend web-based management application and a backend service layer.
The frontend needs strong user identity management, as it is exposed to the outside world.
In summary, the requirements are:
* Password policy.
* Password reset function.
* Admin GUI embedded into the current web management GUI.
We currently use an Oracle table to store accountId, password, and all other relevant information. In the ERD, many table entities depend on accountId.
Now I am considering IDM as an identity management solution. From reading the manual it appears that IDM must work with an LDAP backend. Is this requirement a must? Can I adapt it to work with our current DB table schema?
Also, would you recommend IDM as an "embedded" solution, installed with each installation of the product?
Thank you,
Maxim.

Hi Vwdansh,
SD modules are generally taken up by MBAs or related professionals because they have domain knowledge in the Sales and Distribution area, so your choice is good. However, there are other functional modules in SAP that can be taken as a career option, such as CRM; for details about the major SAP functional modules you can search on Google or on SCN.
When it comes to training, you can opt for weekend classes, which is a suitable option for a working professional. The concern with online training is that when you face a problem, online suggestions may not give you much fruitful learning, so classroom training is preferable.
As you have work experience, I suggest you take training from an authorised training center; tell the counsellor that you want weekend classes so that your present work will not be interrupted.
Finally, the SD module is a right choice for you, but you should also look at some other major modules that may suit you, because there are already many people working in SD, which is not the case with CRM.
If you have any further queries, please revert.

Similar Messages

  • Is DPS the right solution for me?

    I am currently working on a project that includes an iPad app that will direct users to one of 30 brochures. Other developers are working on the iPad app, I am creating the brochures.
    If I use DPS to publish the brochures, do I need to create 30 custom viewer apps (at $495 a month/$6,000 a year), or is there a more cost-effective way to deliver the brochures? The brochures will be created in InDesign, and I would like them to have some interactive features (image slideshows, pinch & zoom, etc.), but they have to be viewable on an iPad.

    Well, a DPS professional license would allow you to publish all 30 of those brochures in one app. But that would mean that all your brochures are available to anyone who downloads the app, so depending on your specific needs that could cause a problem.
    I love doing brochures through DPS, but if you already have developers working on the front and back ends of an app, I don't know if you want to replace their work with DPS.
    But long story short, with the limited info you have provided, it sounds like DPS could possibly be the right solution for you.

  • Is Workspace Manager the right solution for history data only

    I've been experimenting with WM to enable history on all tables in an application we have, with about 100 tables.
    Everything works fine, except that we have lots of FK constraints that require enabling versioning in the correct order.
    We know that changes to the DB layout will occur as the app develops with business requirements.
    My question is: is WM the right solution here, as I don't see any way to "transport" historical data in the event of a migration from the old DB layout to a new layout (DB refactoring)?
    /christian

    Hi Christian,
    When you execute enableversioning on the tables, you can specify the complete set of tables that you want to version, instead of having to maintain the correct table order.
    There shouldn't be a need to transport the data between tables. We support DDL changes on versioned tables by using the beginDDL/commitDDL procedures. If you do require the transport of data between databases, then note that we only support a full database import/export. The user guide covers both of these topics in greater detail.
    Regards,
    Ben

  • Is Oracle Text the right solution for this need of a specific search!

    Hi,
    We are on Oracle 11.2.0.2 on Solaris 10. We need to be able to search data that contain diacritical marks, and the search should ignore those marks. That is the requirement. I heard that Oracle Text has a preference called BASIC_LEXER that can bypass diacritical marks, so solely because of this feature I implemented Oracle Text - just for this diacritics-insensitive search and no other need.
    I mean I set up preference like this:
      ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
      ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    With this I set up like this:
    CREATE TABLE TEXT_TEST
      (NAME  VARCHAR2(255 BYTE));
    --created Oracle Text index (note: the cust_wl wordlist preference must also exist)
    CREATE INDEX TEXT_TEST_IDX1 ON TEXT_TEST
    (NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    --sample data to illustrate the problem
    Insert into TEXT_TEST
       (NAME)
    Values
       ('muller');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('müller');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('MULLER');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('MÜLLER');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('PAUL HERNANDEZ');
    Insert into TEXT_TEST
       (NAME)
    Values
       ('CHRISTOPHER Phil');
    COMMIT;
    --Now there is an alternative solution: instead of Oracle Text, the plain function below (it seems to work neatly for my simple need of removing the effect of diacritical characters in searches).
    --I need to evaluate which is better given my specific needs - the function below or Oracle Text.
    CREATE OR REPLACE FUNCTION remove_dia(p_value IN VARCHAR2, p_doUpper IN VARCHAR2 := 'Y')
    RETURN VARCHAR2 DETERMINISTIC
    IS
    OUTPUT_STR VARCHAR2(4000);
    begin
    IF (p_doUpper = 'Y') THEN
       OUTPUT_STR := UPPER(p_value);
    ELSE
       OUTPUT_STR := p_value;
    END IF;
    OUTPUT_STR := TRANSLATE(OUTPUT_STR,'ÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÑÒÓÔÕÖØÙÚÛÜÝàáâãäåçèéêëìíîïñòóôõöøùúûüýÿ', 'AAAAAACEEEEIIIINOOOOOOUUUUYaaaaaaceeeeiiiinoooooouuuuyy');
    RETURN (OUTPUT_STR);
    end;
    --now I query for names starting with P%:
    --The query below gets me an unexpected result of two rows, because with Oracle Text each word is parsed for search using CONTAINS...
    SQL> select * from text_test where contains(name,'P%')>0;
    NAME
    PAUL HERNANDEZ
    CHRISTOPHER Phil
    --Below query gets me the right and expected result of one row...
    SQL> select * from text_test where name like 'P%';
    NAME
    PAUL HERNANDEZ
    --Below query gets me the right and expected result of one row...
    SQL>  select * from text_test where remove_dia(name) like remove_dia('P%');
    NAME
    PAUL HERNANDEZ

    My entire need was only to be able to do a search that bypasses diacritical characters. I am wondering whether implementing Oracle Text just for that was the right choice - more so now that I find the functionality of LIKE is not available in Oracle Text: Oracle Text searches are based on tokens or words, which differ from the output of the LIKE operator. So maybe I should have just used a simple function like remove_dia above for my purpose instead of Oracle Text.
    This function (remove_dia) just removes the diacritical characters, and maybe that is all my need requires. Can someone help review whether, given my need, I am better off not using Oracle Text? I need to keep the functionality of the LIKE operator and also bypass diacritical characters, so the simple function meets my need, whereas Oracle Text changes the behaviour of the search queries.
    Thanks,
    OrauserN

    If all you need is LIKE functionality and you do not need any of the complex search capabilities of Oracle Text, then I would not use Oracle Text. I would create a function-based index on your name column that uses your function that removes the diacritical marks, so that your searches will be faster. Please see the demonstration below.
    SCOTT@orcl_11gR2> CREATE TABLE TEXT_TEST
      2    (NAME  VARCHAR2(255 BYTE))
      3  /
    Table created.
    SCOTT@orcl_11gR2> Insert all
      2  into TEXT_TEST (NAME) Values ('muller')
      3  into TEXT_TEST (NAME) Values ('müller')
      4  into TEXT_TEST (NAME) Values ('MULLER')
      5  into TEXT_TEST (NAME) Values ('MÜLLER')
      6  into TEXT_TEST (NAME) Values ('PAUL HERNANDEZ')
      7  into TEXT_TEST (NAME) Values ('CHRISTOPHER Phil')
      8  select * from dual
      9  /
    6 rows created.
    SCOTT@orcl_11gR2> CREATE OR REPLACE FUNCTION remove_dia
      2    (p_value   IN VARCHAR2,
      3       p_doUpper IN VARCHAR2 := 'Y')
      4    RETURN VARCHAR2 DETERMINISTIC
      5  IS
      6    OUTPUT_STR VARCHAR2(4000);
      7  begin
      8    IF (p_doUpper = 'Y') THEN
      9        OUTPUT_STR := UPPER(p_value);
    10    ELSE
    11        OUTPUT_STR := p_value;
    12    END IF;
    13    RETURN
    14        TRANSLATE
    15          (OUTPUT_STR,
    16           'ÀÁÂÃÄÅÇÈÉÊËÌÍÎÏÑÒÓÔÕÖØÙÚÛÜÝàáâãäåçèéêëìíîïñòóôõöøùúûüýÿ',
    17           'AAAAAACEEEEIIIINOOOOOOUUUUYaaaaaaceeeeiiiinoooooouuuuyy');
    18  end;
    19  /
    Function created.
    SCOTT@orcl_11gR2> show errors
    No errors.
    SCOTT@orcl_11gR2> CREATE INDEX text_test_remove_dia_name
      2  ON text_test (remove_dia (name))
      3  /
    Index created.
    SCOTT@orcl_11gR2> set autotrace on explain
    SCOTT@orcl_11gR2> select * from text_test
      2  where  remove_dia (name) like remove_dia ('mü%')
      3  /
    NAME
    muller
    müller
    MULLER
    MÜLLER
    4 rows selected.
    Execution Plan
    Plan hash value: 3139591283
    | Id  | Operation                   | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                           |     1 |  2131 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| TEXT_TEST                 |     1 |  2131 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | TEXT_TEST_REMOVE_DIA_NAME |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('mü%'))
           filter("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('mü%'))
    Note
       - dynamic sampling used for this statement (level=2)
    SCOTT@orcl_11gR2> select * from text_test
      2  where  remove_dia (name) like remove_dia ('P%')
      3  /
    NAME
    PAUL HERNANDEZ
    1 row selected.
    Execution Plan
    Plan hash value: 3139591283
    | Id  | Operation                   | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |                           |     1 |  2131 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| TEXT_TEST                 |     1 |  2131 |     2   (0)| 00:00:01 |
    |*  2 |   INDEX RANGE SCAN          | TEXT_TEST_REMOVE_DIA_NAME |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('P%'))
           filter("SCOTT"."REMOVE_DIA"("NAME") LIKE "REMOVE_DIA"('P%'))
    Note
       - dynamic sampling used for this statement (level=2)
    SCOTT@orcl_11gR2>
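    For readers comparing approaches outside the database: the base-letter folding that both BASIC_LEXER and the TRANSLATE-based remove_dia perform can also be sketched with Unicode normalization. The Python sketch below is illustrative only (and, unlike the TRANSLATE mapping, NFD decomposition does not fold letters such as 'Ø' that have no combining-mark decomposition):

    ```python
    import unicodedata

    def remove_dia(value: str, do_upper: bool = True) -> str:
        """Strip diacritical marks via Unicode NFD decomposition,
        mirroring the TRANSLATE-based PL/SQL function above."""
        s = value.upper() if do_upper else value
        decomposed = unicodedata.normalize("NFD", s)
        # Drop combining marks (Unicode category 'Mn').
        return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

    names = ["muller", "müller", "MULLER", "MÜLLER", "PAUL HERNANDEZ"]
    matches = [n for n in names if remove_dia(n).startswith(remove_dia("mü"))]
    print(matches)  # the four Muller variants, diacritics ignored
    ```

    The same fold-both-sides pattern is what the function-based-index LIKE queries in the transcript above rely on.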

  • Is JavaFX the right solution for this scenario...

    Hi,
    Is JavaFX the right choice for the following implementation choice I have to make? (see below for the requirement)
    Requirements:
    1. Provide a way to visualise within a web application an entity relationship type diagram (i.e. nodes with relationships between them). The backend database will hold the topology relationship. So to visualise this on a web application will need the ability to draw rectangles/lines/text etc.
    2. Provide a way to allow the user to trigger "add a new node" or "edit info in this node" - for example, a right-hand context-sensitive menu.
    3. Ideally will scale as the user resizes the browser window
    4. Would like the main functionality of the application to remain web based (is a Ruby on Rails application in fact), but have the visualization of the diagram render within the web application as transparently as possible.
    Options / Issues:
    * An issue I've struck in the investigation I've done is that whilst the <canvas> tag looks good for Mozilla/Firefox etc., it does not seem to be supported in Internet Explorer. Hence cross-browser compatibility seems to be a real issue for the JavaScript-type solutions from what I can see. This is why I thought JavaFX may be good?
    * Options therefore seem to me to be:
    - javascript (e.g. <canvas> tag) => cross-platform issue
    - JavaFX / Applet => (this is what I'm asking about)
    - Microsoft => costs $$ for development environment etc
    - AIR / Flex => ??? costs $$ again I think
    Regards
    Greg

    thanks - I'm still a little confused re their products and which would fit best so I've sent them some questions (below if you're interested)
    Hello,
    Could you please assist me in understanding which of your products would satisfy my needs. In fact (a) whether JGraph itself would and if not, or if it's not ideal, (b) which other product would.
    REQUIREMENTS:
    1. Provide a way to visualise within a web application a connectivity-type diagram (i.e. nodes with relationships between them, a network-connectivity type of diagram).
    2. The server side (i.e. web application with database) will hold the topology relationship. HTTP type interfaces off the web application can be developed to provide the client side visualizing component with the topology data in the required format (assume this is XML)
    3. As well as just visualizing in the browser, there would need to be a way for the user to trigger a context-sensitive "add a new node" or "edit info in this node" - for example, a right-hand context-sensitive menu.
    4. Ideally the diagram will scale as the user resizes the browser window
    5. Would like the main functionality of the application to remain web-based, but have the visualization of the diagram render within the web application as transparently as possible. The visualizing component would just take topology data and intelligently display it.
    6. DESIRABLE: Basic automated layout would be nice, or, depending on cost, more sophisticated automated layout.
    QUESTIONS:
    As well as your recommendation re which product would suit, I had some specific questions which I would appreciate clarification on:
    Q1 - I assume that if I have a web backend that can deliver topology information in an appropriate XML format via an HTTP REST-type GET call, this could be used as the source of data for a JGraph visualisation running within an applet?
    Q2 - If running within an Applet, can jGraph cater for a right hand menu option off the nodes/links on the graph, that I could use to trigger other calls back to the backend? (e.g. to trigger an Add New Node call)
    Q3 - Following on from Q2 scenario, if I trigger an add new node scenario, if I wanted to visualise the form to type in the attributes for the new node, could this be handled within the applet by jGraph, or would this be a case of just adding your own Swing based dialogs to handle this?
    Q4 - Does the basic JGraph package do any basic layout without having to go up to the Layout Pro package (which I think costs money if used commercially)?
    Q5 - If the answer to Q4 is no, how difficult would it be to do a basic layout using only the base JGraph library? Is this doable/recommended? I.e. how would one "lay out" the diagram if using only the base JGraph component? (Noting from my requirements, I'm really after a component I can send my topology information to in XML form and have it just visualise it for me.)
    Q6 - Running the visualisation in an applet in a browser, is the typical usage one where all changes to the topology are made as calls to the backend? Or is there an approach where users make changes to the topology within the applet, building up the changes on the client, and then at some point sync them back to the backend? (I'm assuming the keep-it-simple approach would be not to do this.)
    Q7 - Is there a sample application/project with source code that implements a JGraph in applet/browser talking to web backend for data?
    Q8 - How do JGraph Pro & mxGraph fit into the picture re solving my requirements in the most cost-effective manner?

  • Is Structured FM12 the right solution for this problem?

    I've been tasked with solving a tricky problem at my office. Briefly: We've got a technical manual that needs to be printed each time we sell a piece of equipment. Currently, the manual is produced using a combination of an MS Access DB and a convoluted Word doc that uses Mail Merge to pull data from the Access DB into the appropriate fields. The DB has a hundred tables or so, and some of the tables contain "calculated" values - i.e. the value in those fields is calculated based on values entered in other tables within the DB. The Word doc uses some kind of "if-then-else" logic to determine which tables to use for which fields when building the doc. This whole setup is currently maintained by the engineering, sales, and marketing departments.
    We currently use FM11 (unstructured) in the Technical Writing department, and my boss is asking me to figure out a way to migrate the Access/Word doc described above to a format we can use so we can take ownership of the documentation process for this particular line of equipment. I suspect the variables involved here go way beyond what we can do with conditional text and variables within FM, but I'm wondering if Structured FM (either 11 or 12) is more suited to this project, either by using some sort of conduit between FM and an SQL DB, or directly within FM using conditional text, variables, or some other organizational function present in either FM11 or FM12.
    Any guidance here would be appreciated. I'm not even sure what questions to ask at this point to get a proper shove in the right direction.

    I love this line: Get that SQL queries into XML directly or into CSV and transform the CSV into XML via XSLT. Reminds me of that bit in "Good Morning, Vietnam" where Robin Williams goes through the routine about the Vice President: "Well, sir, since the VP is such a VIP, shouldn't we keep his ETA on the QT? Otherwise, he might get KIA and then we'd all be on KP." And now, back to work...
    I'm going to try to answer all of the questions above, in order, and add a little info where appropriate.
    TW dept may or may not take over all maintenance of the doc. That's one of the recommendations I'm tasked with providing. My current thinking is, engineering should remain in charge of entering relevant tool data to the data repository, sales should provide TW with a "spec sheet" of what was sold, and TW should then use the specs to "build" the document for customer release.
    Will a DB still be used? Unknown. That seems the best way to catalog and access the data, but I'm open to alternatives if there are any that make sense.
    I am totally unfamiliar with structure, XML, DITA, HTML, etc. Literally 100% of my experience with FM has been in the unstructured workspace. This whole structured FM inquiry was inspired by a blurb in my "Unstructured FrameMaker 11" book (from p474: "If you need to use complex combinations of condition tags, consider setting up a workflow in the structured interface of FrameMaker. Structured documents allow you to create versions based on attributes, which are easier to use and more powerful than expressions.") A quick Google of this blurb didn't turn up much useful information, but this seems to jive with Lynne's input above re: attributes.
    Data is not currently in SQL - it's in Access DB. We can migrate to SQL if necessary - that's one of the answers I'm supposed to be providing.
    The reason this is all currently being done in Word is because that's what the Sales & Engineering departments understand, and currently they're responsible for these docs. I'm sure this started out as a simple, nerdy solution to a small problem - some engineer didn't want to maintain two separate Word docs when he could create a database and then use mail merge to automagically build the same thing. Since then, it's grown to hundreds of tables and thousands of possible permutations.
    We already have FrameMaker installations in the department. If we can do this with FM11, great, but if not, no big deal - boss has already said he wants to move to FM12 as soon as possible. In other words, purchasing software - unless it's something additional needed to make all this work - isn't really a budgetary consideration.
    As mentioned, I have no skills with using FM for any kind of structured project, but I'm willing to learn if it seems like a good solution. Just need to figure out how to proceed.
    Thanks for your input - looking forward to hearing what else you folks have to say.
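    The "SQL queries into CSV and transform the CSV into XML" route quoted earlier in this thread can be sketched in a few lines. The Python example below is purely illustrative - the column and element names are invented, not taken from the actual tool database:

    ```python
    import csv
    import io
    import xml.etree.ElementTree as ET

    # Hypothetical spec-sheet data as it might come out of an Access/SQL export.
    csv_text = """model,voltage,weight
    A-100,120,15
    A-200,240,42
    """

    # Build one <tool> element per CSV row; structured FM could import such XML.
    root = ET.Element("tools")
    for row in csv.DictReader(io.StringIO(csv_text.replace("\n    ", "\n"))):
        tool = ET.SubElement(root, "tool", model=row["model"].strip())
        ET.SubElement(tool, "voltage").text = row["voltage"]
        ET.SubElement(tool, "weight").text = row["weight"]

    xml_out = ET.tostring(root, encoding="unicode")
    print(xml_out)
    ```

    In practice the element names would come from whatever DTD/EDD structure the structured-FM template defines.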

  • Is the Snow Leopard Mac Mini Server the right solution for my office?

    I'm the de facto "sysadmin" for my small office, which usually just means I set up the wireless, configure network printing, troubleshoot little issues with Mail and MS Office products.
    Currently, we have 4 employees all on iMacs. We share files through a slapped-together setup, where there is a public folder on our owner's iMac and we all share files there. There are a few problems with this:
    - If the owner's computer is off, no-one can get to the shared files.
    - The owner's computer has had some strange "permissions" issues so sometimes files in the "Public" shared folder end up being read-only, or "read & write" for "nobody".
    - A 5th employee telecommutes on an iMac, and can't access the shared folder or files.
    So, we're considering getting a Mac Mini Server to do file storage and sharing, both locally and with telecommuting employees (of which there may be more in the future).
    - Is this the best solution to our needs - really just file sharing, no web hosting or anything like that?
    - What level of access control / authentication can we do on the Server? For example, could we have a password protected folder on the server to restrict access?
    - Would we need to upgrade our standard DSL service if we want to share files on the server with folks not on the local network?
    - Am I biting off more than I can chew here, given that my technical knowledge is slim but I am the most computer-literate of anyone in the office, so I will need to trouble-shoot any issues that come up with the server?

    For your stated goal, network-attached storage (NAS) or an always-on Mac client would be a simpler solution. Either preferably with RAID, and with provisions and storage for periodic archives.
    A Mac OS X Server box is overkill. The Mac client boxes have 10-client sharing.
    If you want single-signon and shared directory services and mail and web and various of the other pieces and services that are available within, then you can grow into a Mac OS X Server box.
    A server is rather more to manage, regardless of what you choose. You're getting DNS and networking and other core pieces, minimally, and you're also responsible for many of the configuration settings and services and details that a client box receives from a server box. And you're definitely dealing with protections and such across multiple boxes.
    For some other perspectives, there are various previous discussions of this posted around the forums. A search that includes NAS should kick over a few of these; this is a typical low-end alternative to running a server.

  • SQL Cluster 2014 installation error - is the following the right solution? If yes, why? If not, what are the pros and cons of this?

    http://www.bradg.co.za/?p=12
    SQL 2008 R2 Install Permissions
    SQL 2008 R2 is a little tricky to install if you don’t have the right permissions.  If the account you are using does not have the correct permissions you will find yourself getting loads of strange errors such as below during the installation.
    not all privileges or groups referenced are assigned to the caller
    The process does not possess the ‘SeSecurityPrivilege’ privilege which is required for this operation
    The error will prompt you to retry or cancel.  If you cancel your SQL installation will fail and if you retry you will be prompted with the error again.
    The way to resolve this issue is to grant the account you are using the correct privileges:
    - Make the account a local administrator on the box you are installing SQL on.
    - Go to Local Security Policy > User Rights Assignment and grant the account:
      - Back up files and directories
      - Debug programs
      - Manage auditing and security log
      - Restore files and directories
      - Take ownership of files or other objects
    Once this has taken effect you should be able to install SQL without any issues.

    It might render OK in browsers, but the code is riddled with errors.
    http://validator.w3.org/check?verbose=1&uri=http%3A%2F%2Fwww.incredibleart.org%2Flinks%2Fartists_black.html
    This one looks like it could be your red herring.
      Line 111, Column 83: cannot generate system identifier for general entity "f.defaultView.getComputedStyle"
    Nancy O.

  • Which is the right solution?

    I have a simple application that sends/reads mail, so I have a JSP for writing and sending a new mail.
    When I read a mail, I can reply to it; in this case I have to fill the body and the sender with the data of the original mail.
    The question is whether I can reuse the page already written for sending new messages, or generate a page via a bean: which is the best solution?
    If I reuse the JSP page used for a new mail, how can I fill the body and the sender field? In the JSP I have this line:
    <input type="text" name="dest" size="50" />
    so I think it is not possible to fill the "value" dynamically, is it?
    Many thanks in advance for any idea.

    There is a "value" attribute for most input tags. You can read the details in any HTML reference guide.
    There is no reason to use the same JSP for both read and write. Write requires a form. The read doesn't. Just use two JSPs so each does its specific job. Cleaner that way.
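    To make the answer concrete: the value attribute can be filled server-side (in JSP, roughly value="<%= dest %>"). The main trap is quoting, since data containing quote characters would break the attribute. Here is a language-neutral sketch in Python (the helper name is invented for illustration):

    ```python
    import html

    def prefill_input(name: str, value: str) -> str:
        """Render an <input> whose value is pre-filled from the original mail.
        Escaping prevents quotes in the data from breaking the attribute."""
        return '<input type="text" name="{}" size="50" value="{}" />'.format(
            name, html.escape(value, quote=True))

    print(prefill_input("dest", "alice@example.com"))
    ```

    In JSP the equivalent escaping would typically be done with JSTL's fn:escapeXml or c:out rather than by hand.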

  • Deciding whether TestStand is the right solution for my company

    I'm working at a growing company and we recently started a push in our Test group to try and standardize our future development as much as possible to allow as much re-use of code as possible.  I don't want to say exactly what it is we make for confidentiality reasons, but let's just say that 90% of what we make is the same basic type of product, but it comes in literally hundreds of models ranging in size from about the size of a dime to the size of a small washing machine.  We're considering switching to TestStand on all new stations and on all updates of previous stations to assist in rapid development with increased efficiency.
    Over the years we've made about 20 or so test stations as new versions of our product came out, each able to fit products of a different size and with different test requirements, but because our main products are all so similar to each other, there's only a pool of about 6 different types of tests we need to run, with some products needing only one of those tests and others needing all 6, and plenty in the 2-5 range.  The size differences among different products also mean we have a large amount of different power supplies, and for various other reasons the measurement devices for the test stations aren't terribly standardized as well.
    Step 1 was that we standardized the database we used to store all of the test data.  We now have one database that can store data from all of our test stations in the same way.  A few of the old stations had to have some creative workarounds done (i.e. instead of rewriting their current test program, write something that takes their old output format and converts it to the new database format, then upload it), but this step is 100% complete.
    Step 2 was to abstract out the most common pieces of hardware, so we have started basically putting the IVI types in an LVOOP wrapper.   Now instead of being limited to interchangeability among IVI devices we can also swap in any of the other assorted devices we've accumulated over the years, many of which don't have IVI drivers and even a few that don't follow SCPI standards, as this is a great use of inheritance and overrides.  We're also implementing a few device types that don't have IVI classes.  This effort is already well underway and we're OK here.
    Step 3 is where we're at now.  As we standardized on hardware interfacing, we also figured it would be a good idea to attempt to effectively write each of the 6 or so tests one last time each with very flexible setup parameters as inputs, using the abstracted hardware classes so that it wouldn't matter which hardware we had, so that all the tests would run the same way.  So we started looking at solutions to some form of sequence management, and we came up with a couple of possibilities for homegrown versions but after debating those for a while we started to ask ourselves, "Are we just re-inventing TestStand here?"
    We use TestStand on a few stations here already so we had licences for development, but the stations we use it on at the moment fall into the 10% of outliers that we have and are somewhat non-standard.  So none of the 6-ish usual tests were used on them and they're for a very different product type, so we never tried to use them with our standardized database.  They also were all made before I came on board here, so I don't have much experience with TestStand myself, and I've run into some roadblocks when trying to come up with a framework for how to integrate our standard database and our standard instrument LVOOP classes into a TestStand environment that meets our current and future test requirements.
    The first roadblock is that our database standardization and TestStand's built-in reporting don't mesh at all.  Right now, all of our test stations use the same "mega-cluster" to store their data in, which is a type-def cluster of all the things our products could ever be tested for on any of our stations.  We pass it to a common database write function that writes everything that has changed from the default values (NaN and blank strings\arrays) and ignores the rest, so we don't fill the database with zeroes for the tests we don't do.  I do want to emphasize how big this cluster got in order to have all of the potential save values we might ever need in it.  It is a cluster of 13 other clusters, the largest of which has 10 items in it, and one of those is a 1-D array of another cluster with 19 elements in it, and one of those elements is another 1-d array with a cluster of 24 units in it.  If all of the controls in the mega-cluster are shown on the front panel at once, I can't fit them on 2 full-size 1080p screens at the same time.  The context help for the main cluster is 5 screen heights tall just for the listing of all of the elements.  
    So I really can't see this being integrated with the built-in TestStand reporting system.  Additionally, the "Type" system in TestStand appears to be unhelpful when attempting to pass this cluster between steps, and even worse when trying to pass it between sequences.  When I add the mega-cluster to it, it also creates over 30 other types, one for each of the other clusters contained inside it.  Within LabVIEW alone this is pretty easy to manage, as we just have a centralized repository of typedefs in source control that gets checked out to a common directory on each computer, and we make minor changes to accommodate new products a few times a year.  On TestStand, however, I can't find an equivalent of just storing a bunch of .ctl files in a shared directory like we do with LabVIEW.  I see that types can be stored in certain INI files or in sequence files.  The INI files appear to all be station-specific, and since all of our 20ish test stations have different hardware and capabilities, we can't just synchronize those files.  And if I want to pass the mega-cluster type between sequence files that call sub-sequence files, every one of those files will need to be manually updated to get their types to match... and we'll be needing at least 200 sequence files when we're done, at a guess.  Unless there's some method I'm missing here, I don't see how to use TestStand to move this data around.  We could do something like passing it by reference, but then I don't see much benefit to using TestStand over pure LabVIEW, since we'd be giving up two of the things TestStand is supposed to handle for us (reporting steps as pass/fail against limits, and reporting and logging the results afterwards).  Plus we'd have the additional burden of making sure the module holding the reference in LabVIEW doesn't unload before we're done using it.
    Is there some better way of handling massively complex types in TestStand that I am not seeing?
    There may be more that I'm not seeing yet, but if there isn't a way to handle these massively complex types in TestStand, all we'd be getting out of it is a slightly easier method of managing sequencing than we have now, which I don't see as being worth the investment in time and licenses it would take.
    I'd be grateful for any feedback.  Thanks!

    Considering you already seem to be leveraging LabVIEW heavily, before jumping into TestStand I would look at the PTP Sequencer.  This seems to be a cut-down version of TestStand, but inside LabVIEW, and it is a simple install from the LabVIEW Tools Network using VIPM.
    If you are happy to look at alternative representations then maybe check out NI's StateChart Module or the Open StateChart by Vogel (available as an install from VIPM).
    The advantage of any of the above is that they will be a lot cheaper than TestStand and the required run-time licences, if you don't need all the power of TestStand (extensible everything).

  • HT204053 Somehow the very first email I had years ago, whose password I don't even remember, keeps coming up as my Apple ID whenever I want to make a purchase on iTunes. Why won't my real ID come up, and how can I find the right solution?

    Does anyone know how to eliminate a mistaken Apple ID that you still use as an email? For some reason, when I go to make a purchase at iTunes, it brings up this email as my Apple ID instead of my actual ID, and it will not take the current password I have for this email. I believe it is confused because years ago this was my first email, and it had a different password back then that I no longer remember.
    I am very sorry if this all sounds confusing, but my MacBook has been out of commission for the past year, and I have only recently been trying to work on it. I am not savvy on the computer at all, so if my next question sounds lame, you'll know why... I wondered if that may have caused any of my latest hitches, because when I first purchased the Mac, 4 years ago, this rogue email was the one I signed up with for everything on my Mac. I have not been using the Mac for the past 9 months, just my iPhone, until that died 3 months ago; then I got this new iPhone 5 2 months ago, thinking my problems would be a thing of the past, at least for a little while. But they have not.
    Sticking to the problem at hand: I have a fully functioning Apple ID that works, and occasionally iTunes would bring the correct ID up, and only once in a while bring the bogus one up. But in the last few weeks, all it asks for is the wrong ID, and my frustration has reached its max! I have missed out on purchasing two apps I needed and some music I wanted, and I don't think I'll be able to use iTunes at all until I can make my devices realize that my email is just an email, not an Apple ID, and then make it accept my real ID, without having to eliminate or delete the email address I still use and receive mail at.
    Please forgive this lengthy explanation! I was only trying to make sense; if I confused you more, I'm very sorry. Thank you to any brave ones who tackle this and actually have some solutions for me. I thank you much indeed!  Sincerely, Kim

    Contact iTunes Customer Service and request assistance
    Use this Link  >  Apple  Support  iTunes Store  Contact

  • Is mpd the right solution for me? [more questions!]

    My scenario is as follows:
    I'm at work all day, but I want to listen to my music at home. I have complete access to SSH to my computer at home, I can open ports and all that from here.
    When I was running Windows, I could just use Terminal Services to get into my box and hit play on my media player, and it would bring the audio to my work machine. It was a bit laggy, but OK.
    MPD seems like it could be a great solution.
    Can I set up MPD to run on my machine at home, and then connect with a client on my windows machine at work to it? Is that how it works?
    Or would I have to setup a shoutcast?
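    From what I can tell from the MPD docs, the streaming side would be an audio_output section in mpd.conf pointed at an Icecast server, something like the sketch below. All the values here are placeholders, and I haven't verified the option names against the current MPD release, so please correct me if this is off:

    ```
    # Send MPD's output to a local Icecast server (values are placeholders)
    audio_output {
        type        "shout"
        name        "Home MPD Stream"
        host        "localhost"       # where Icecast is running
        port        "8000"
        mount       "/mpd.ogg"
        password    "hackme"          # Icecast source password
        bitrate     "128"
        format      "44100:16:2"
    }
    ```

    Then I'd just point a player at that mount point from work, and use any MPD client over SSH (or the open port) to control playback.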
    What do you guys do for music at work? Or are you more productive than that?
    thanks!
    Last edited by cschep (2007-04-04 21:46:23)

    Thanks for following up guys. I ended up going with Icecast + MPD and it works a TREAT. It is exactly what I was looking for.
    Is there any way to configure it to be an .mp3 stream instead of .ogg? No real need, just wondered.
    Also, is Icecast set up to accept more than one listener at a time by default, or does that take a lot of tweaking? I tried to have a friend connect to mine real fast while I was on, and the "peak clients" went to 2, but he never got connected.
    Maybe my upstream just wasn't fast enough..
    oh! Also... maybe I should ask in another thread, but as long as we're talking about audio and .ogg and stuff..
    Does anyone know of a good batch converter from .mp3 to .ogg? I think I'm going to make the switch to a fully open standard.
    I have a few CDs that I purchased from the Apple store as well. I don't want to break forum rules by talking about breaking DRM, but I could just burn them and rip them to .ogg, or does anyone know of a way to convert those straight to .ogg?
    Again, if that's not a question that should be discussed, feel free to ignore/chastise me.
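    For the batch conversion part, here's a rough Python sketch of what I have in mind: it just builds one ffmpeg command line per file, which I'd then run one by one. I'm assuming ffmpeg is installed with its libvorbis encoder, and the flags may vary between ffmpeg versions, so treat this as a starting point:

    ```python
    from pathlib import Path

    def build_convert_commands(mp3_files, quality=5):
        """Build one ffmpeg command per .mp3, converting it to Ogg Vorbis.

        Only constructs the argument lists; run each with subprocess.run()
        when you're ready to do the actual conversion.
        """
        commands = []
        for src in map(Path, mp3_files):
            dst = src.with_suffix(".ogg")
            commands.append([
                "ffmpeg",
                "-i", str(src),        # input .mp3
                "-vn",                 # skip any embedded cover art
                "-c:a", "libvorbis",   # encode audio with Vorbis
                "-q:a", str(quality),  # VBR quality level
                str(dst),
            ])
        return commands
    ```

    (Bear in mind mp3 -> ogg is a lossy-to-lossy transcode, so quality takes a small hit either way.)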
    thanks guys!
    oh, TU: great burglar joke, I should keep a directory full of gunfire and other related movie sounds to scare the piss out of people at random.
    Last edited by cschep (2007-04-04 21:47:23)

  • Can anyone help me find the right solution?

    How can I solve this problem?
    Process:         Adobe Illustrator [843]
    Path:            /Applications/Adobe Illustrator CC 2014/Adobe Illustrator.app/Contents/MacOS/Adobe Illustrator
    Identifier:      com.adobe.illustrator
    Version:         ???
    Code Type:       X86-64 (Native)
    Parent Process:  launchd [145]
    Responsible:     Adobe Illustrator [843]
    User ID:         501
    Date/Time:       2014-08-08 22:19:02.603 +0200
    OS Version:      Mac OS X 10.9.4 (13E28)
    Report Version:  11
    Anonymous UUID:  8722A45D-6A10-3163-FAC5-7588D879B59F
    Crashed Thread:  0
    Exception Type:  EXC_BREAKPOINT (SIGTRAP)
    Exception Codes: 0x0000000000000002, 0x0000000000000000
    Application Specific Information:
    dyld: launch, loading dependent libraries
    Dyld Error Message:
      Library not loaded: @executable_path/../Frameworks/amtlib.framework/Versions/A/amtlib
      Referenced from: /Applications/Adobe Illustrator CC 2014/Adobe Illustrator.app/Contents/MacOS/Adobe Illustrator
      Reason: image not found
    Binary Images:
        0x7fff61cfd000 -     0x7fff61d30817  dyld (239.4) <042C4CED-6FB2-3B1C-948B-CAF2EE3B9F7A> /usr/lib/dyld
        0x7fff8cccd000 -     0x7fff8cd39fff  com.apple.framework.IOKit (2.0.1 - 907.100.12) <9C518309-DB33-3636-B1CE-416CECB633A8> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
        0x7fff8f04e000 -     0x7fff8fbc4ff7  com.apple.AppKit (6.9 - 1265.21) <9DC13B27-841D-3839-93B2-3EDE66157BDE> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit
        0x7fff94afd000 -     0x7fff94afdfff  com.apple.Cocoa (6.8 - 20) <E90E99D7-A425-3301-A025-D9E0CD11918E> /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa
        0x7fff94f6a000 -     0x7fff94f6afff  com.apple.CoreServices (59 - 59) <7A697B5E-F179-30DF-93F2-8B503CEEEFD5> /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices
        0x7fff95a5c000 -     0x7fff95d5afff  com.apple.Foundation (6.9 - 1056.13) <2EE9AB07-3EA0-37D3-B407-4A520F2CB497> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation
        0x7fff9632d000 -     0x7fff9632dfff  com.apple.ApplicationServices (48 - 48) <3E3F01A8-314D-378F-835E-9CC4F8820031> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices
        0x7fff96e4a000 -     0x7fff96e4afff  com.apple.Carbon (154 - 157) <EFC1A1C0-CB07-395A-B038-CFA2E71D3E69> /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon
        0x7fff9a3d0000 -     0x7fff9a524ff3  com.apple.audio.toolbox.AudioToolbox (1.10 - 1.10) <69B273E8-5A8E-3FC7-B807-C16B657662FE> /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox
        0x7fff9a8cb000 -     0x7fff9a92effb  com.apple.SystemConfiguration (1.13.1 - 1.13.1) <9910E760-B2B0-3C66-9494-3C8072EDB61C> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration
    Model: iMac14,4, BootROM IM144.0179.B03, 2 processors, Intel Core i5, 1.4 GHz, 8 GB, SMC 2.21f88
    Graphics: Intel HD Graphics 5000, Intel HD Graphics 5000, Built-In
    Memory Module: BANK 0/DIMM0, 4 GB, DDR3, 1600 MHz, 0x80AD, 0x483943434E4E4E424C54414C41522D4E5444
    Memory Module: BANK 1/DIMM0, 4 GB, DDR3, 1600 MHz, 0x80AD, 0x483943434E4E4E424C54414C41522D4E5444
    AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x111), Broadcom BCM43xx 1.0 (6.30.223.154.65)
    Bluetooth: Version 4.2.6f1 14216, 3 services, 23 devices, 1 incoming serial ports
    Network Service: Wi-Fi, AirPort, en1
    Serial ATA Device: APPLE HDD HTS545050A7E362, 500,11 GB
    USB Device: BRCM20702 Hub
    USB Device: Bluetooth USB Host Controller
    USB Device: FaceTime HD Camera (Built-in)
    Thunderbolt Bus: iMac, Apple Inc., 30.8

    Hi San,
    I have gone to SE38, typed LogonGroup* and pressed F4; I got a transaction INST_LOGONGROUP, but that sets a logon group value of ALL-IN-ONE <sid> as the name of the logon group, and that value is hard-coded in it.
    What I need is a generalized function module that accepts the logon group name as a parameter and sets the value.
    When we go to transaction SMLG, we see a screen where we set the logon group value, and whatever value we give, it sets that value. How do I find out which function module corresponds to that?
    I have debugged it but was unable to track down the function module. :-(
    Kindly help me out if you can.

  • Is Distribute the right solution?

    I am working on a Mac OS X Version 10.6.7
    I am working on a not-for-profit scholarship fund website for a client. I have designed a Scholarship Application document.pdf with fields to fill out. I have added extended features to the form so someone can fill it out, save it to their computer if needed, and submit it.
    I have created a link on the Scholarship Fund webpage where they can open the scholarship document.pdf. I have also created a mailto: link on the Scholarship Fund webpage that will open their email handler. Then, when done filling out the fields, they can attach the scholarship.pdf to the email that will be sent to the Scholarship Committee for review.
    Here are the issues I am having:
    1. It is my understanding, in order to use Distribute you have to email the document.pdf directly to the person you want to fill it out. You cannot have a random person open the document from a random website and submit the form when it is completed correct?
    2. My client would like to have the student manually sign (not digitally) the document.pdf. That is, fill the fields out on the computer, print it off, sign it in pen, and scan it back into the computer. This is why I added the mailto: link on the website so the student could submit the manually signed document.pdf to the committee. 
    3. My client would also like the student to submit transcripts and various other documents to the committee for review. It is my understanding that you cannot add additional documents.pdf to another document.pdf after you have distributed it. Again that is why I have created the mailto: link on the website for submission.
    My client likes the uniformity of using Adobe Distribute, and we want to handle this in the most professional, seamless and proficient way. Based on all of our issues I am not sure that Distribute is the best, or even a viable, option for us.  I'm open to any ideas.
    You can see the beta version I have prepared at http://www.mattguise.com/brianewagner/scholarship.html

    There doesn't seem to be anything in your post that can't be done.
    BUT (and this is a big one), you more than likely will be breaking the terms of your license by using an enabled form for this purpose.
    Read section 15.12.3 of the End User License Agreement before you proceed or make any promises.
    The basic rundown is, you can only collect data 500 times for one of these forms. That includes them filling it in and sending it to you as a PDF; filling it in, signing and scanning it as a PDF; printing it out, filling it in and mailing it to you... etc.
    You'll want to be very sure you are following the letter of the agreement on this. A few agencies in my state that did not follow it have been in court over it.

  • Is JMS based solution the right one?

    Hello,
    I'm doing some research on possible solutions for a process that is currently completed semi-manually and semi-automated. The process consists of feeding several thousand ID numbers to a legacy C++ application, which in turn, after some processing of its own, sends more data to an external system. It is automated in that there is a batch file that kicks off the C++ application. The C++ application opens a text file, reads an ID number, and then processes it. It repeats this for as many IDs as there are in the text file. However, it is manual in that there are several text files, broken up into 500 IDs per file (which we create), that are sent over several weeks. The reason for this is a limitation in the external system that we have no control over: it's 500 IDs per day, no questions asked. Because of this limitation, we have to go in and run the batch file for each of those text files. In addition, we have to monitor each text file as it is processed in case of a failure. If a failure occurs, we have to go into the text file, delete all of the IDs that were successfully processed, and then run the batch process again.
    At any rate, as much as we would like to, we are unable to rip the current system out and replace it altogether. The legacy C++ component must remain intact. I am looking at possibly modifying the C++ application to pick up the ID numbers from an external application--instead of a text file--that keeps count of the number of IDs processed in a given day. I'd like to continue logging error messages, but instead of killing the entire process, I'd like for them to be ignored and for the process to continue until the 500-ID threshold is reached for the day. I'm not too familiar with JMS technology, but from what I have read a JMS-based external application may be a good candidate, though I'm thinking it may be overkill. I do like the fact that it is reliable, loosely coupled, and asynchronous. So I guess my question is: is a JMS-based application the right solution for this problem? Or is it overkill?
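    To make the intended behaviour concrete, here is a rough Python sketch of the dispatcher logic I'm describing, independent of whether JMS or something simpler sits underneath. `process_id` is a placeholder for the hand-off to the legacy C++ application, and I'm assuming only successful sends count toward the daily limit:

    ```python
    from collections import deque

    DAILY_LIMIT = 500  # hard cap imposed by the external system

    def run_daily_batch(pending, process_id, limit=DAILY_LIMIT):
        """Process up to `limit` IDs from the queue; log failures and keep going."""
        processed, failed = [], []
        while pending and len(processed) < limit:
            current_id = pending.popleft()
            try:
                process_id(current_id)  # hand the ID to the legacy application
            except Exception as exc:
                failed.append((current_id, str(exc)))  # log it, don't abort
            else:
                processed.append(current_id)
        return processed, failed
    ```

    Whatever survives in `pending` after the run would simply be picked up the next day, so nobody has to edit text files by hand after a failure.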
    Thanks in advance for the assistance.

    hi java esse,
    i think you can accomplish your goals using JMS.
    BUT if you just do this "simple" task with it, i would not do it, since it takes some time to get used to handling the JMS servers etc.
    if you consider your actual refactoring just as a starting point for many applications, with more to come, this use case could be a good one.
    in general my only technical fear for this scenario would be that you might store some messages for a really long time, maybe some weeks, since your limit is 500 / day. this might be a problem when you put other applications with really high throughput on the same JMS broker.
    is it 500 successful a day or 500 tries a day?
    regards chris
