Best approach to attach note to decision via extended notifications?

Hi,
I'm looking for advice on the best approach to handle this requirement. We are using extended notifications to deliver a user decision task to users' Outlook inboxes. Users want to be able to attach a note after they select one of the decision links in the email. Currently, if they click any option, they just get the standard page that says "Work item executed with decision option (whatever)". There is no feature on that page for attaching a note.
We want to keep this as simple as possible from the users' standpoint. Currently these users are not using the UWL, and they don't use SAPGUI very much.  The need to attach a note will be infrequent, so the procedure needs to be obvious and simple, or the feature won't be useful. 
The SAP system is ECC 6.0 with Netweaver 7.01. The immediate application is purchase requisition rejection (we want to be able to attach a note when rejecting), but I am hoping for a generic solution.
I'm thinking we have to create a substitute page that gets called when the link is clicked. Will that work? Or is there an easier way?
Your thoughts?
Thanks,
Margaret


Similar Messages

  • Issue with Chinese Translations not picked up in Extended Notifications

    All Gurus,
    We are on ECC 6.0 with the below settings.
    SAP_BASIS     700     0015     SAPKB70015     SAP Basis Component
    SAP_ABA     700     0015     SAPKA70015     Cross-Application Component
    We have been using Extended Notifications for quite some time and they work fine in English. We are now working on generating the Outlook mails in Chinese. In these, some standard SAP texts (maintained under "General Settings" in transaction SWNCONFIG) are not picked up even though the corresponding Chinese translation exists. Specifically, the English text
    "The following new work items require processing XXXXXXXX" corresponds to Dialog text "SWN_PROLOG_MULTI" maintained for parameter "TEXT_PROLOG_MULTI" in General settings.
    When the Outlook mail is generated in Chinese, the corresponding Chinese text is not picked up and the mail says "Text Not Found". A similar issue existed for some other Dialog texts maintained for parameters "TEXT_GOTO_WI" and "TEXT_CONTACT_ADMIN", for which SAP provided Notes 1661893 and 1661894. For those texts, once the Chinese text is maintained in SE63, it is picked up in the Outlook mail.
    Has anybody faced a similar issue with Chinese translations in Extended Notifications? If yes, please let me know how to fix it.
    Also, does anybody know the message / Dialog text number for the text "Execute Work item.SAP" in the link that is attached to the Outlook mail? Please tell me how to maintain the Chinese text for it.
    Timely reply is really appreciated.
    Thanks in advance & Regards,
    venu

    All,
    SAP provided the translations in Notes 1675156 & 1675157, and they resolved both of the issues mentioned. Also, the "Execute Work Item.SAP" link text is picked up from the text elements of the class CL_SWN_NOTIF_WORKFLOW.
    As the issue is resolved, I'm closing this thread.
    Thanks
    venu

  • PDF attachment not getting sent via email. This occurs intermittently.

    Hi All,
    There is a concurrent program which sends report on Purchase Orders. The report is sent as a PDF attachment via email.
    The report is attached sometimes and not attached at other times, resulting in an empty email. I checked the log files and output files of the program and couldn't find any issue. There is no specific time/PO/vendor pattern to when the program fails.
    Please let me know what needs to be done to get this resolved.
    Regards,
    Radhika.

    Hi,
    I am facing this issue with Oracle 11i - US Purchasing SuperUser responsibility.
    There is a concurrent program which generates a report and then sends it to the printer. The report is not stored in a temp file in any directory; it is emailed directly as a PDF attachment.
    Sometimes the attachment is sent, and sometimes it is not attached.
    Please let me know if any other information is required.
    Regards,
    Radhika.

  • Best Approach: All IDocs in one software component or not

    Hi All
    We are in the process of importing IDocs into the PI system. We maintain software components based on the legacy systems.
    We have two options for importing IDocs into software components:
    1. Import all IDocs into a common software component (here I am thinking with respect to transports)
    2. Import them into the respective legacy software component
    Could you please suggest the best approach?
    Thanks
    Sai

    Thanks prateek.
    Actually we have some IDocs common across software components. That's why we are planning to go for a common software component which can maintain all ECC objects.
    If we maintain it like this, is there any problem in test and production, mainly during transports (dependency from one SWC to the common SWC)?
    Thanks
    Sai

  • Best approach -To create RTF template having more than 50 tables.

    Hi All,
    Need your help. I am new to BI Publisher. Currently we are using BIP 11g.
    I want to develop an .rtf template with lots of layouts and images.
    The data comes from different tables (for example, pulling from around 40 tables). When I tried to pull data from 5 tables by joining them, it took a long time using a data model in BI Publisher 11g, saved as XML and used in the Word doc.
    Could you please suggest the best approach: whether I should develop the .rtf template via a data model or via a query to generate the report.
    Also, please suggest / guide me.
    Regards & Thanks in advance.

    It's a very specific requirement.
    First of all, it relates to the logic behind the report.
    For example, are the 50 tables related? Or are they 50 independent tables? Or maybe 5 related and the others independent?
    Based on the relationships between the tables you create your SQL statement(s).
    How many SQL statement(s) you end up with leads you to identify ways to get the data - for example, by package or trigger etc.
    Keep in mind the size of the resulting select statement(s): if the size is, say, 1 MB it should be fast to get the report, but 1000 MB can consume a lot of time.
    Also keep in mind that the time goes not only into selecting the data but into merging the data with the template.
    It looks like experimenting, and knowing the full logic of the report, is the only way to get the needed output in terms of data and time.
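    To illustrate the first point with a hedged sketch (the table and column names here are hypothetical, not from the original post): related tables can be collapsed into one joined statement feeding a single data set, while independent tables are usually better served by separate, smaller queries.

        -- Related tables: one joined statement, one data set for the template.
        select h.order_id, h.order_date, l.line_number, l.amount
          from order_headers h
          join order_lines l on l.order_id = h.order_id;

        -- An unrelated lookup table: keep it as its own small data set.
        select region_code, region_name
          from regions;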

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite.  I'll paste a snippet from the Error Log in at the bottom that shows the error - I've highlighted the description of the problem in red.
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like the one we're running. Here's the link to that thread: Re: Calendar crashes on open  For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff).  Any thoughts on "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • Best approach to develop DataTypes

    Hey guys
    suppose I'm doing a complex file scenario in which I have a flat file on either the sender or the receiver side. What is the best approach to developing the data type for the file structure? Is it possible to generate an XML and, with the help of that, create a data type?
    I actually want to know the professional way of generating data types (I'm pretty sure that in real-world scenarios we get very complex file structures).
    thanx
    Ahmad

    Hi,
    In many cases, if your structure is very complex, you cannot get directly nested XML after content conversion. In that case, we need to handle the generation of the nested structure in the mapping. So you use Java or XSLT mapping etc. if it is not possible via graphical mapping. You can also do this in an adapter module.
    Here you go with good example- Generic Structure-
    /people/sravya.talanki2/blog/2005/08/16/configuring-generic-sender-file-cc-adapter
    Also file content conversion - limitations-
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50061bd9-e56e-2910-3495-c5faa652b710
    Rgds,
    Moorthy

  • What's Best Approach for Multitrack Classical Music?

    Can someone suggest the best approach for recording classical musicians onto
    four tracks? In this scenario, they play until they make a mistake on, say,
    measure 24, stop, then (take 2) go back to measure 20 and play until the next
    rough spot, and so on. Ultimately there may be 15 takes that all need to be
    trimmed and stitched together.
    In the old (tape) days, this was pretty basic editing. I would use a blade and block
    to cut out all the bad stuff on the multitrack tape, then I could mix. But how do I
    do this in Audition? (I use version 1.5.)
    I can't do the cuts in edit view because the tracks would get out of sync.
    Assuming all the takes are in one session, in multitrack view, this most basic of
    functions seems to elude me. What am I missing?

    Al the Drifter wrote:
    If you follow Steve's advice, and after doing the edits you discover
    that one instrument should come up 1db, you are screwed.
    I could be wrong about this in the classical music environment,
    where things are not close-mic'ed but if I am, I am confident Steve
    will correct me.  Ha.
    You always run the risk of small changes between takes - and that's where Audition 3 and the new improved crossfades score rather heavily. You won't notice 1dB on a single instrument across a fade though - it's hard to spot this as a jump, even, unless it's on pure tone. No, I very rarely close-mic stuff at all, although I did with a clavichord recently - it's seriously too quiet to mic any other way.
    jaypea500 wrote:
     when recording classical music, any engineer worth anything has the mix down pat as it's being recorded. 
    That's the way they used to work, certainly - but not nowadays, especially if it's done on location, which most classical recording is. What's more likely to happen is that you'd use decent mic preamps feeding straight into a multitrack, or even some software on a laptop. I generally record like that - but I also feed the multitrack outputs to a Yamaha mixer via ADAT, do a mix on that and record it back to a spare multitrack pair. I don't actually need to do that - but having a mix available from the multitrack that's pretty much there is good as far as being able to play back takes to conductors is concerned.
    Of course, one of the other reasons that classical sessions recorded on location aren't mixed on the spot is that the monitoring conditions are invariably far from ideal, and I'd have it that no engineer worth anything would ever risk a final mix done on location.
    But I only get paid to do all of this on a regular basis, so what would I know? Must be something though - my customers come back for more...

  • Best approach to implement this requirement in OIM

    Hi experts
    We have an application (say App1) in which we use the GTC DB connector to provision users from OIM. But due to some limitations in App1, the communication has to be changed from GTC to a custom web services approach.
    For this we have done the following configurations
    1. Introduced new mandatory fields to the same process form of App1 (as per the requirement).
    2. Cleaned all the GTC adapters out of the process definition of App1 and attached the web-service-related adapters.
    So after introducing web services, the known issues are:
    1. Revoke user fails for existing users, since the existing users were provisioned using the GTC connector.
    The reason is that, by default, if we revoke a user, the revoke user task now contains the newly created web services adapter, which will not complete the operation, saying that mandatory values are missing for the fields in the process form.
    2. The same is the case for Disable/Enable users.
    What would be the best approach to perform the above operations for existing users who are already provisioned?
    We are not willing to create a new Resource Object for the web services communication.
    Any valuable pointers are highly appreciated.

    Hi
    I am trying to run the FVC utility to update the process form fields for old users.
    The FVC properties file looks like
    fvc.properties
    ResourceObject;GTCConnector
    FormName;UD_GTCFORM
    FromVersion;v1.0
    ToVersion;v3.0
    Parent;UD_GTCFORM_DUMMYFIELD;Default;Update
    The field UD_GTCFORM_DUMMYFIELD has been created in the latest active version, v3.0. I am trying to update some value into the older versions of the process form.
    When I run the script fvcutil_websphere.cmd, it throws the error message below.
    WSCL0100E: Exception received: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at com.ibm.ws.client.applicationclient.launchClient.createContainerAndLaunchApp(launchClient.java:747)
    at com.ibm.ws.client.applicationclient.launchClient.main(launchClient.java:469)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:183)
    at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:90)
    at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:72)
    at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:226)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:376)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:163)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at org.eclipse.core.launcher.Main.invokeFramework(Main.java:334)
    at org.eclipse.core.launcher.Main.basicRun(Main.java:278)
    at org.eclipse.core.launcher.Main.run(Main.java:973)
    at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:245)
    at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:73)
    at com.ibm.websphere.client.applicationclient.launchClient.main(launchClient.java:238)
    Caused by: java.util.NoSuchElementException
    at java.util.StringTokenizer.nextToken(StringTokenizer.java:347)
    at com.thortech.xl.util.fvcutil.FVCUtil.initialize(Unknown Source)
    at com.thortech.xl.util.fvcutil.FVCUtil.<init>(Unknown Source)
    at com.thortech.xl.util.fvcutil.FVCUtil.main(Unknown Source)
    ... 26 more
    Has anyone come across this type of error?
    Thanks in Advance

  • Best approach for performing DMLs using stored procedures

    Hi,
    I have a really general question and would like to hear your say about this.
    I want my application to manipulate or read data using stored procedures (or packages in that manner) and not directly using queries against the DB.
    Let's say I have a table with many columns:
    create table test (pkid number(10), col1 varchar2(30), col2 number(10), col3 date, ...);
    For such a DML procedure, is it best to do something like
    procedure do_update(i_pkid IN number, i_col1 IN varchar2, i_col2 IN number, i_col3 IN date, ...) as
    begin
       update test
       set col1=i_col1,
            col2=i_col2,
            col3=i_col3...
       where pkid=i_pkid;
       commit;
    end;
    Or do a selective update, meaning update only a certain column every time, given only 1 column actually changes? (and how to do that - separate procedures for each column? [columns can be nulls])
    Also, is it better to work with test.col1%type instead of specifying the data type in the procedure?
    And one last question - If I have a table with 100 columns and I don't want to create a procedure with 100 parameters - the best approach would be to use a record?
    I just need to be set on the way I start implementing things in order to do it well from the start.
    Many thanks.

    Pyrocks wrote:
    One last clarification (although it may be more related to c/c++ developers - maybe one of you will know):
    We are working with C++ and VB against a SQLServer and my part is to translate all the existing procedures to Oracle in order to migrate the application to work with Oracle DB.
    The existing procedures use an IN parameter for each column in the table and I would really like to use rowtype like you mentioned.
    Since I'm not a c/c++/vb developer - is there an easy way of working with such types, or even User-Defined Types, from c/c++/vb (as in passing a rowtype record from c++ to a SP ?)
    I'm not looking for the actual way - just want to know how hard it would be and how much code needs to be changed in order to be able to convince the developers that this is the RIGHT way to work.
    PS. none of our developers have experience with ORACLE so they wouldn't know the answer...
    Not actually an Oracle server-side (SQL language or PL/SQL language) question - but a client one. And it has been a long time since I wrote a fat client using C/C++ or Delphi.
    The OCI (Oracle Call Interface) supports advanced (user-defined) SQL data types. It has since Oracle 8i. So in that respect, yes, the client can support custom SQL data types.
    How well it does depends entirely on that client language's features wrt OCI integration. For example, Delphi 4 was released around Oracle 8i and supported custom SQL types. I would expect that most languages today (like Java and C#) provide support for it.
    As for using %ROWTYPE - this is a PL/SQL clause as far as I know. Unsure whether it is supported by the OCI. What could support it is a pre-compiler like Pro*C. These enable you to mix pseudo SQL source code with client language source code. The pre-compilation step then replaces the pseudo SQL code with native client language calls to the OCI. The code is then compiled by that client language's compiler. Pre-compilers can pull all kinds of interesting "tricks" with their pseudo SQL code support.
    The best would be to consult the applicable client language's manuals that describe the interface it supports (via OCI) to Oracle.
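    For what it's worth, a minimal sketch of the record-based approach discussed above, assuming the test table from the question (whether the C++/VB client can bind such a record is exactly the OCI question raised here):

        create or replace procedure do_update(i_row in test%rowtype) as
        begin
           -- One record parameter replaces a long list of scalar parameters.
           update test
              set col1 = i_row.col1,
                  col2 = i_row.col2,
                  col3 = i_row.col3
            where pkid = i_row.pkid;
           -- Transaction control is deliberately left to the caller, so
           -- several updates can share one transaction.
        end;
        /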

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of  about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this is to find all tag items and replace them with shared variable items?
    Thanks in advance with any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using.  Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012.  Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • Best approach to publish new table or new column on existing table on MDW?

    Hi,
    I'm refering to Olite R3 without any patches. I don't use Java API, I use MDW.
    If I have a new table or a new column on an existing table, what's the best approach to publishing it?
    I'm asking this because I've tried lots of approaches, and the only solution was, step by step:
    1) On MDW, drop the publication item
    2) Add again the publication item
    3) Associate the publication item to the publication
    4) Save everything
    5) File / Deploy (if I don't do it, it does not work)
    6) Tools/Package... (that's where the problem is: if I don't remove the app and create it again, it does not work!)
    7) on the client side, I perform a msync with "force refresh"
    That's the only way I found to reliably publish new items. Any other action does not push the new table or new column to the client's embedded DB.
    Any comments?
    Regards,
    Maurício Américo Vernaschi.

    I do not use MDW, rather a mix of Java and the final publish step you use, but:
    Adding new PIs should be easy - just add them and re-publish (no need to drop anything).
    For changes, if you just have new columns and the SQL statement is 'select * from', then you should just need to make the changes in the base schema objects and run the publish with no changes, and the updates should be picked up. If you are selecting specific columns, then update and re-publish.
    When using MDW, at the end you can save the application as a jar file and then use this jar file to publish in the Mobile Manager - this is the best way to publish.
    Have a look at this jar file in WinZip and you will find it contains a web.xml file. This is the XML definition of the publication items, and for simple changes it is possible to just edit this file and republish via the Mobile Manager.

  • Best approach to develop office add-in

    Hello,
    I'm a .NET programmer and I've developed add-ins for Outlook, Word and Excel with VS 2008 + .NET 3.5 + VSTO.
    There were many problems initially with the VSTO add-in, and I faced a lot of difficulties solving them, especially when working with both Outlook and Word.
    My client has Adobe Acrobat installed on every machine, and no doubt the performance of the Acrobat add-in in the Office applications is superb. On the other hand, VSTO add-ins take time to load, and the performance is especially poor when there is interaction between VB6 and the MS Office application.
    Many times the Outlook add-in gets disabled when Outlook is not running and VB6 code creates a new mail item to send and display to the user. Many times the mail window freezes, and there are many such problems.
    Now the question I have is: can anyone tell me the best approach to developing an Office add-in - for example, how and in what language is the Acrobat add-in developed? Or the best practice for developing it with VSTO, and which is best, and why?
    Thanks,
    Hemant


  • What is the best approach to return Large data from Stored Procedure ?

    no answers to my original post, maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data,
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested"
    I suspect this is taking too much time; any tips about the best approach here? is there a resource with REAL examples on returning large data?
    If I switch to CLOB, will it speed the process, be compatible with callers, etc ? all references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon
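    For reference, a minimal sketch of the CLOB alternative asked about above (the build loop is a hypothetical placeholder for the real computation): instead of staging a LONG work buffer in a temp table and selecting it back through a REF CURSOR, the procedure can append each computed chunk directly to a temporary CLOB OUT parameter.

        create or replace procedure get_large_data(
            p_input  in  varchar2,
            p_result out clob
        ) as
        begin
            -- Build the result directly in a temporary LOB instead of a
            -- LONG buffer staged through a temp table.
            dbms_lob.createtemporary(p_result, true);
            for i in 1 .. 1000 loop
                -- Hypothetical computation: append whatever chunk the
                -- real logic produces for this iteration.
                dbms_lob.writeappend(p_result, length(p_input), p_input);
            end loop;
        end;
        /

    Whether the MSDAORA and ODBC callers handle a CLOB OUT parameter gracefully would still need to be tested against both drivers.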


  • Best approach to dealing with someone else's spaghetti code with no documentation

    Hello LabVIEW gurus,
    I have just been given a few software tools to add functionality to and rewrite, each of which is a big spaghetti mess; each tool has 100+ VIs, all spaghetti. These tools control a very complex machine talking via serial, parallel, Ethernet, 485, etc., and there is barely any documentation of the logic or the implementation of the source code / what the subVIs do.
    What would be my best approach to understand this mess and recreate it in a structured way faster? It has a lot of old sequence structures and just plain bad programming style.
    any help is highly appreciated
    Thanks all

    And do not forget about using the VI Analyzer Toolkit!  It can reveal several obvious sources to clarify code that "stinks". A lot of skull sweat went into that framework and it has significant value!
    Norbert_B wrote:
    If your task is only to ADD things, you might be interested in Steve's recommendation here.
    Norbert
    (Inside joke ahead)
    Ah, That explains the TDMS File Viewer!
    You really should run that through the VIA... :smileymad
    It can be done fairly quickly.
    How do you unspoiler? Ah well, I'll hope a moderator can leave only the first comment "spoiled".
    Note the quote from the link: "The Code we inherited might have been 'richly obfuscated.'" "Richly obfuscated" code was a code review term used for code written by your boss... The VIA would call it something else.
    Jeff
