Relocate j2ee_system_name /global

I have a cluster installation (EP6 SP2, J2EE PL23) and would like to move the /global folder from the primary server onto a clustered file service to ensure availability in the event that something unfortunate happens.  This used to be recommended but recent documentation suggests it should be left on the primary server.
What is the current advice on this?
What needs to be updated in addition to system.SharedFolder and servletpath in the config_local, cm and cfw .properties files?
If this can be relocated, does the /usr/sap folder still need to be shared? (File shares on web/application servers seem to cause concern to security types)
Thanks,
=Richard

Sorry.
Helps if I had the right OSS number anyway... Here's the contents of the OSS message.
During our fail-over testing, when the primary portal server is offline, the resident "Global" SAP J2EE directory becomes unavailable and all KM iViews fail because they are unable to access the CM configuration found in that Global directory.
The EP 6.0 Infrastructure Guide mentions that in a clustered Portal environment, you should move the "global" directory to a networked file system, but there is NO other documentation describing exactly how to do this.
Is there any detailed documentation available from SAP on how to move this essential KM file structure to a "secure" server? What files need to be modified to point to the new location of the /global/config/cm/config directory?
Please advise -
John
Communication
Reply Hello John, 
18.03.2004 12:10:46
Please refer to the following document of the EP6 Installation Guide:
EP6SP2_J2EECluster_v2_final.pdf
On page 27 you will find the chapter 'Mount Global Portal Directory'. According to this you have to create a mountpoint from the network file system location for all portal instances (including the primary instance). This way the global folder does not point to the primary instance but to the network file system location.
Best regards,
Info for SAP  - 
19.03.2004 05:38:13
The Install Guide section you reference is specific to UNIX platforms and its use of NFS mounts. Our implementation platform is Win2K/SQLSVR. Previously, we have set a share on this Global directory with a permission of read/execute for the group Everyone. As stated, we are looking to move this directory to a separate network server to eliminate its dependence on our Primary Portal server, which has the potential to go down, causing KM access problems from the clustered Portal server(s).
Do you have any other Win2K-specific steps to complete this directory relocation?
Regards - John
Reply Hi John, 
22.03.2004 09:03:32
Here are the directions for moving the global directory. Please document that you made this change for later maintenance of the system, and please back up the files you are going to change in case you want to restore the previous configuration. This procedure is untested, so please apply it at your own risk.
In the file "additionalsystemproperties" under the j2ee directory change the following properties to point to the new location:
com.sap.system.globaldir
ume.crypt.keyfile
sap.portal.Pcd.Home
Please make sure that the user running the portal has write access to the Pcd.Home directory (or at least to the Export and Import directories).
Please keep 'sap.portal.Pcd.Temp' a local directory; moving this to a network share could slow down the export/import and cause concurrency problems in the migration.
In the file "installed_portals.dat" the property
prt_global_dir
If CM is installed the files cm.properties and cfw.properties also need to be configured properly.
Of course you also need to copy the content of the global directory to the new location.
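Taken together, the edits described in this reply might look like the fragment below. \\FILESRV\portal is a placeholder for your network share, not a real value; keep the existing subpath of each property and change only the host/share prefix.

```
# additionalsystemproperties (under the j2ee directory)
com.sap.system.globaldir=\\FILESRV\portal\global
ume.crypt.keyfile=\\FILESRV\portal\global\<existing subpath>
sap.portal.Pcd.Home=\\FILESRV\portal\global\<existing subpath>
# sap.portal.Pcd.Temp deliberately stays a local directory

# installed_portals.dat (under the j2ee directory)
prt_global_dir=\\FILESRV\portal\global
```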
Regards,
Info for SAP, 
22.03.2004 12:00:26
If this procedure is undocumented, how do you consider KM to be Highly Available?
Also you mention the "installed_portals.dat" file but you didn't seem to finish your thought; could you please provide further details regarding that statement?
Thank you,
John
Reply Hi John, 
23.03.2004 03:48:37
I meant in the file "installed_portals.dat" under the j2ee directory change the property "prt_global_dir" to point to the new location.
I will forward this message to KM so they will answer your questions about KM.
Regards,
Reply Mr. Sperger, 
23.03.2004 05:00:31
The main configuration files for the CM are:
cm.properties and cfw.properties
In the cm.properties the following entry must point to your desired location:
System.SharedFolder=<server>/<location>/global/config/cm
In the cfw.properties the following entry must point to your desired location:
System.SharedFolder=<server>/<location>/global/config/cm
File system location for the files above:
D:\usr\sap\PORTALNAME\j2ee\j2ee_INSTANCE\cluster\server\services\servlet_jsp\work\jspTemp\irj\root\WEB-INF\portal\system\cm
In the config_local.properties the following entry must point to your desired location:
servletpath=//<server>/<location>/global/config/cm
File system location:
D:\usr\sap\PORTALNAME\j2ee\j2ee_INSTANCE\cluster\server\services\servlet_jsp\work\jspTemp\irj\root\WEB-INF\portal\portalapps\com.sap.km.application\lib
As mentioned above by Boris, the CM contents must then reside at the new location.
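For reference, here is a minimal, hypothetical Python sketch of the kind of edit described above: it repoints a properties-style key such as System.SharedFolder. The file contents, instance name, and share path are examples, not actual system values.

```python
# Hypothetical helper: repoint a key in a Java-properties-style file.
# Key names follow the post above; //FILESRV/portal is an example share.
import re

def repoint(text, key, new_value):
    """Return `text` with the value of `key` replaced by `new_value`."""
    pattern = re.compile(r'^(\s*' + re.escape(key) + r'\s*=).*$', re.MULTILINE)
    return pattern.sub(lambda m: m.group(1) + new_value, text)

# Example: cm.properties text (cfw.properties is edited the same way)
cm_text = "System.SharedFolder=D:/usr/sap/P01/global/config/cm\nsome.other.key=1\n"
updated = repoint(cm_text, "System.SharedFolder",
                  "//FILESRV/portal/global/config/cm")
```

The same helper would apply to the servletpath entry in config_local.properties; as the reply above advises, back up the originals before changing anything.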
Regards,

Similar Messages

  • Global objects imported into dynamic libs

    Hi
    This might be more of a linker question.
    I've noticed that when I have things like statics/globals/inlines in a static lib, then they also get imported into dynamic libs that also use them. This causes problems since there's no dependency rule in my makefiles for this (since the dynamic lib does not depend on the static lib source files). Here's a tiny example.
    a.h:
    #include <string>
    class a {
    public:
        static std::string s;
    };
    a.cpp:
    #include "a.h"
    std::string a::s = std::string("Hello");
    build "liba.a":
    CC -c -O2 -I. -o a.o a.cpp
    CC -xar -o liba.a a.o
    b.h:
    void foo();
    b.cpp:
    #include "a.h"
    #include <iostream>
    void foo() {
        std::cout << " foo() " << a::s << std::endl;
        return;
    }
    build "libb.so":
    CC -c -O2 -KPIC -I. -o b.o b.cpp
    CC -R/usr/lib -G -h libb.so.1 -o libb.so.1.0.0 b.o -L. -la
    Now here comes the problem, libb.so contains a copy of the a::s string
    paulf> nm -C libb.so | grep a::s
    [78] | 69280| 4|OBJT |GLOB |0 |19 |a::s
    Now, if I edit a.cpp and change the string from "Hello" to "Goodbye", then if I run make for liba.a, it gets rebuilt, but if I run make for libb.so, it doesn't (since it doesn't depend on a.cpp).
    But...
    paulf> strings liba.a | grep Good
    Goodbye
    and
    strings libb.so
    foo()
    Hello
    Clearly this is not good news. If I link an app with these two libraries, then there are two instances of the same 'a::s', but with different values.
    Does anyone have a good explanation for why this happens, and how it can be avoided?
    A+
    Paul

    There are at least three things wrong with this picture.
    1. The shared library libb.so pulls in liba.a, but the objects in liba were not built with PIC code. The libb library will not actually be shared, since it contains non-relocatable code. For a library to be shared, everything in it must be relocatable.
    2. libb.so depends on something not expressed in the dependencies. That seems to be one of your questions. Add liba.a as something libb.so depends on; if liba.a needs to be updated, libb.so will automatically be rebuilt:
    libb.so: b.cpp liba.a
    3. It is seldom a good idea to link a shared library B.so with a static library A.a, if A.a can be independently linked by program parts that use B.so.
    3.1. You can get version skew of A.a if a client of B.so gets A.a from a different source. Bits of A.a of different versions might not mix well.
    3.2. Different parts of A.a can appear in different shared libraries, which can cause havoc at library init and fini time. That is, during shared library C initialization, you might wind up calling something from A that was in shared library B, which is not yet initialized. Initialization of C is interrupted so as to initialize B so that the function in B can be called. You might now be doing initialization out of order.
    If B.so links with library A, and a program that uses B.so also links library A, you should make library A a shared library. You then avoid all the problems listed above.
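    The dependency fix in point 2 and the all-shared layout in the last paragraph can be sketched as makefile rules. This is a sketch, not a tested build; the compiler flags simply mirror the Sun CC commands quoted above.

    ```make
    # Point 2: rebuild libb.so whenever liba.a changes
    libb.so.1.0.0: b.o liba.a
    	CC -R/usr/lib -G -h libb.so.1 -o $@ b.o -L. -la

    # Preferred layout: build A as a shared library too, so exactly one
    # copy of a::s exists at run time
    liba.so.1.0.0: a.cpp
    	CC -c -O2 -KPIC -I. -o a.o a.cpp
    	CC -G -h liba.so.1 -o $@ a.o
    ```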

  • Anyone using Global Correlation?

    Hello,
    We are using the AIP-10 IPS module in our ASA firewall. I am thinking of turning the Global Correlation feature on, as it has been on test for a few weeks. Is anyone else using this feature?
    Thanks

    I either get the QT movie delivered on DVD-ROM or delivered on an external FW drive already in the size and format that I want. I don't use DV output (I run the video in a window on a 2nd monitor).
    I create a separate logic song for every music cue, which I find works for me. it minimises the need to fiddle with locking objects to TC when there are tempo changes etc. in order to hear the other music that I am working on in the film to get an overall global feel etc, I bounce a work in progress from each song and place them in the right place on a dedicated track in every music cue logic song. time consuming but effective.
    for the film audio, I extract it to a track from the QT and run it in logic. can be a pain when having to change offsets but I have a technique for cutting the audio file to realign it with the film if I need to offset it.
    what I would like to see is far better implementation of movies in logic in general. I tried using the detect cuts feature for a while but found it pretty much useless for me. I am still finding that logic forgets what QT was saved with a song, so I am having to relocate it again, and it irritatingly defaults to looking in the movies folder on my system drive.. so instead of fighting it, I copy the QT to that folder when I can, ie, when it is not too big a file. in general I would like a more elastic and intuitive way to glide a movie start point as far back or forward as I would like, to easily align the start point of a music cue with bar one.
    that's all I can think of for now. any other Qs just ask.

  • Global data getting reset when running under IIS?

    We have a scenario using IIS with an ASP.NET web service written in VB.NET. When a call to the web service is made, the web service calls a native dll (written in C, compiled using VS2010) using platform invoke, which in turn calls into our product API:
    VB.NET web service -> native library (p/invoke) -> native API ....
    Web service requests are successfully completed and the system runs without problem for hours. A trace of the native API shows it is being called by multiple processes and multiple threads within those processes.
    The main native API dll contains a static global variable used to detect whether it is the first time it has been called, and run initialization logic if it is. This dll is itself linked to a second dll that contains a global variable used to detect if it is the first time it has been called.
    After some hours the trace shows that the native API is invoked by an existing process but that the initialization logic is being exercised again, even though the global variable was set to indicate not-first-time and is never reset. One theory was that the first process has ended and a new process has started almost instantaneously using the same process ID. However this is not the case, as existing thread IDs from the same process are seen to write to the trace again after the first-time logic has executed for the second time, indicating the process has not restarted. The problem occurs regularly.
    It is as though the process's global data has been initialized again and malloc'ed memory freed while the process is still running. Is there any way this is possible when running under IIS?
    There is an internal thread which waits on a blocking read of a named pipe (via ReadFile), and when the problem occurs, the ReadFile call ends with ERROR_NO_ACCESS, which appears to indicate the malloc'ed buffer is no longer valid, again implying something has happened to the memory allocated to the process.

    Suggesting you ask it on:
    http://forums.iis.net/

  • Multiple users accessing the same data in a global temp table

    I have a global temp table (GTT) defined with 'on commit preserve rows'. This table is accessed via a web page using ASP.NET. The application was designed so that every one that accessed the web page could only see their data in the GTT.
    We have just realized that the GTT doesn't appear to be empty as new web users use the application. I believe it has something to do with how ASP is connecting to the database. I only see one entry in the V$SESSION view even when multiple users are using the web page. I believe this single V$SESSION entry is causing only one GTT to be available at a time. Each user is inserting into / selecting out of the same GTT and their results are wrong.
    I'm the back end Oracle developer at this place and I'm having difficulty translating this issue to the front end ASP team. When this web page is accessed, I need it to start a new session, not reuse an existing session. I want to keep the same connection, but just start a new session... Now I'm losing it.. Like I said, I'm the back end guy and all this web/connection/pooling front end stuff is magic to me.
    The GTT isn't going to work unless we get new sessions. How do we do this?
    Thanks!

    DGS wrote:
    I have a global temp table (GTT) defined with 'on commit preserve rows'. This table is accessed via a web page using ASP.NET. The application was designed so that every one that accessed the web page could only see their data in the GTT.
    We have just realized that the GTT doesn't appear to be empty as new web users use the application. I believe it has something to do with how ASP is connecting to the database. I only see one entry in the V$SESSION view even when multiple users are using the web page. I believe this single V$SESSION entry is causing only one GTT to be available at a time. Each user is inserting into / selecting out of the same GTT and their results are wrong.
    I'm the back end Oracle developer at this place and I'm having difficulty translating this issue to the front end ASP team. When this web page is accessed, I need it to start a new session, not reuse an existing session. I want to keep the same connection, but just start a new session... Now I'm losing it.. Like I said, I'm the back end guy and all this web/connection/pooling front end stuff is magic to me.
    The GTT isn't going to work unless we get new sessions. How do we do this?
    Thanks!
    You may want to try changing your GTT to 'ON COMMIT DELETE ROWS' and have the .Net app use a transaction object.
    We had a similar problem and I found help in the following thread:
    Re: Global temp table problem w/ODP?
    All the best.
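    The suggested change, as a sketch (the table name and columns here are hypothetical, not from the original application):

    ```sql
    -- With DELETE ROWS, rows vanish at COMMIT, so a pooled connection
    -- cannot leak one web user's rows into the next request that
    -- happens to reuse the same database session.
    CREATE GLOBAL TEMPORARY TABLE web_work (
      id      NUMBER,
      payload VARCHAR2(200)
    ) ON COMMIT DELETE ROWS;
    ```

    The .NET side then has to insert, select, and commit within a single transaction per request.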

  • How do I move from one external drive to another, without the "Relocate Masters" command?

    I have my masters referenced on an external hard drive, and I've been mirroring that drive with another so that I have a backup (the volume names were ApertureLib1 for the primary drive and Photo7 for the backup).  I've filled that drive, so it will no longer be my primary master drive (i.e. all new masters going forward are going on a new, separate drive).  I want to keep one of the two drives online and put the other away as an offsite backup.
    The catch is that the primary drive (ApertureLib1) does not play nicely with my Lexar Firewire 400 CF reader; when I insert or eject a card, it has a habit of disconnecting itself.  The backup drive (Photo7) does not exhibit this issue.  So I decided that I'd just rename Photo7 to ApertureLib1 and be on my way; the folder containing all the masters (PhotoLib) is mirrored using Arrsync (an rsync GUI), so all the files should be there.
    Great theory, except that Aperture does not recognize the renamed Photo7 drive as ApertureLib1, despite the system seeing it as such (both in the Finder sidebar and showing up in /Volumes as "ApertureLib1").  The "locate referenced masters" dialog shows ApertureLib1 as 'offline' and does not give me any opportunity to point to the second drive as the master location.
    I assume that Aperture is somehow tracking a separate unique identifier for the drive. Does anyone know what this might be and how I might convince it to treat the was-a-backup drive as being the One, True Location for this set of masters?
    Many thanks.
    (and I really don't want to use "relocate masters" because I'd first need to delete the existing copy on the second drive and then move over all of the images from the first drive, before copying them back to have a backup)

    Hi Kevin,
    I understand what you tried to do but it doesn't work that way. Swapping drive names will just mess things up.
    You should be able to reconnect the files though: in the Locate Referenced Files dialog make sure you click the Show Reconnect Options button — this will give you access to all the connected drives. Locate one of the files and hit Reconnect All. Should do the trick.
    Best

  • Local attributes - global attributes tradeoff

    Hi, MDM experts.
    Can you, please, share your experience on business partners repository modeling.
    I build custom business partners repository. While creating it I came to a question - whether local system attributes of business partner should be modelled in that repository?
    Intrinsic attributes like Full Name, State Identity Number and so on should be definitely modeled. Attributes specific to our organization but those that span many of our systems should also be modeled I think.
    But what about specific attributes that are relevant only to one of the systems being integrated? To be concrete, imagine we have an SAP ERP system as one of the systems in the landscape, and an attribute of our business partner such as 'Purchasing organization'. In our case this table is SAP ERP specific, and none of our other systems has such an entity in its data model.
    <b>The question is - is it reasonable to have local system attributes and lookup tables implemented in central MDM repository?</b>
    If yes then isn't our repository going to be overloaded with all that local attributes and lookup tables of every client system?
    If no, then how should the process of central creation of a business partner look? The problem is that in this case the creator won't be able to assign all the attributes he would like to, and he will have to log in to each local system and assign these values after central creation. Moreover, client systems can refuse to create the new record automatically if some attributes are missing. For example, such a situation is typical for IDoc inbound processing.
    Have you any suggestions on streamlining the data model and BP central creation process ?
    Regards,
    Vadim Kalabin

    Hi vadim,
    These are my thoughts on your issue.
    I feel both kinds of attributes should find a place in the same repository.
    This is not going to overload the system. In a typical MDM implementation the volume of main table records will be very large, and the local and global attributes will occupy only a small share of the total records.
    Also the practice is that the MDM DB server and the core server run separately.
    Please see if this article is of use to you:
    <a href="https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d0d8aa53-b11d-2a10-aca2-ff2bd42a8692">MDM Data modelling do's and dont's</a>
    Regards,
    Vijay

  • Problem in Creation of Oracle Session in Global.asa

    Dear all,
    I tried to run an ASP example on OO4O and created the following global.asa.
    <OBJECT RUNAT=Server SCOPE=Application ID=OraSession
    PROGID="OracleInProcServer.XOraSession"></OBJECT>
    <SCRIPT LANGUAGE=VBScript RUNAT=Server>
    Sub Application_OnStart
    'Get an instance of the Connection Pooling object and create a pool of OraDatabase
    OraSession.CreateDatabasePool 20,40,200,"XXX", "xxx/xxx", 0
    End Sub
    Sub Application_OnEnd
    OraSession.destroyDatabasePool
    End Sub
    </SCRIPT>
    However, the IIS reported the following error when running.
    "An error occurred while creating object 'OraSession'".
    I have used the same connection string in a VB project. The connection works perfectly. Does anyone know why the connection cannot be set up from ASP?
    Thank you very much
    Samuel

    I am getting the same problem. If you have found any solution, please let me know.

  • How do I install a language XPI globally and make it the default language?

    I am using Firefox on unices and various Linux distributions (the less common ones). Installing Firefox is easy, but it is always in English. I get most of my interfaces (gnome, libreoffice, gimp, etc) in my preferred translation simply by setting the proper locale (export LANG=xy_XY.UTF-8) before starting Xorg. I found and downloaded my preferred language pack for firefox, which comes in an XPI archive. Opening that XPI file in Firefox (via an URL of file:///path/to/file/langpack.xpi) works smoothly, the translation gets installed under the user's own profile and it shows up in the add-ons (under the new category: Languages). Unfortunately, I still see no way in Firefox preferences to set this added translation as the default. Firefox still starts with the English interface. On SeaMonkey, there is a combobox on the Appearance panel that allows me to select the language of the interface from the list of installed/available languages. I cannot find such option on the panels of Firefox. So far, I had to install a language-changer add-on to be able to set the interface to my native locale. Even though my users never need to switch to any other locales.
    Q1., Once a language xpi is installed, how do I set the interface to use that language?
    Q2., How do I install the language xpi and make it the system-wide default, so that all my users see Firefox starting with the native (non-English) interface?
    I found a handful of descriptions for Q2, but they appear to be outdated and do not work. These include the -install-global-extension argument to Firefox, placing the xpi file as it was downloaded into the "extensions" folder, placing the xpi file renamed to the extension ID into the "extensions" folder, and unzipping the contents of the xpi into a subfolder of the "extensions" folder named as the extension ID. Please note, these attempts might have failed because I placed the extension NOT where it was supposed to go. The instructions refer to the global extension folder as <installation folder>/extensions, but on my system no such folder exists. Firefox seems to get installed into /usr/local/lib/firefox and there is no "extensions" subfolder therein. However, there is a "browser/extensions" subfolder, but placing my XPI there did not trigger any effect. Also, I manually created the "extensions" subfolder in /usr/local/lib/firefox and moved the xpi there (under various names, unzipped and as a whole) without the expected result.
    Any suggestion is very much appreciated!
    My goal is to install the firefox package, then run something from the command line resulting the non-English translation in the downloaded XPI becoming the system-wide default for all users. A step forward would be to know for certain where that "extensions" folder must be on my specific OS. How do I query that?

    Excellent. These guides got me as far as installing my preferred language pack, which gets added to the add-ons automatically (assuming the user says yes to the appearing question when Firefox is started).
    However, the GUI does not change unless the user enters "about:config" and changes the locale.
    Is there a way to do this from the command line? I mean, changing the locale settings.
    And preferably for all users, so even when new users are added and they start Firefox for the first time, they see it appearing with the preferred locale and not in English.
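    One approach worth trying is Firefox's AutoConfig mechanism, which can set a default pref for every profile on the machine. This is a sketch under two assumptions: that your build honours AutoConfig files in the install directory, and that the interface-language pref for your Firefox version is general.useragent.locale (the pref name has changed across versions):

    ```
    // /usr/local/lib/firefox/defaults/pref/local-settings.js
    pref("general.config.filename", "autoconfig.cfg");
    pref("general.config.obscure_value", 0);  // plain-text cfg file

    // /usr/local/lib/firefox/autoconfig.cfg -- first line must be a comment
    defaultPref("general.useragent.locale", "xy-XY");
    ```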

  • SPED PIS/COFINS - Global and Local Chart of Accounts

    Hello everyone,
    After applying all the notes for SPED PIS/COFINS, I started generating the files and validating them. One of the first things I found: here we use the Brazilian chart of accounts (ACBR) and also have the global chart of accounts (PCOA). Under Brazilian legislation, at least for SPED ECD, we had to send the accounts of the local chart of accounts (ACBR) as the alternative accounts, that is, only accounts without letters. However, when I run SPED PIS/COFINS today, in records 0500 and F100 the accounts being loaded are those of the global chart of accounts, not the alternative accounts.
    In SPED ECD there is a flag we can use for the alternative accounts; shouldn't we have one in SPED PIS/COFINS as well?
    Thank you,
    Michael Peretto

    Good morning Vini,
    You are not using the standard report, are you?
    The formatting module in the standard takes care of this regardless of the user:
          WHEN 'P'.
    * --- Change decimal separator and delete spaces
            REPLACE '.' WITH ',' INTO lv_field.
            CONDENSE lv_field NO-GAPS.
    Regards, Fernando Da Ró

  • Class 6 SD card causes problems / lock ups in Droid 2 Global

    My Droid 2 Global phone came with an 8GB SD card and I want to increase it to a 16GB card.  I purchased an adata 16GB Class 6 micro SD card.
    I copied all my data from my 8GB card to my laptop.  I then copied the data from the laptop to the 16GB card.  Put it in the phone and started the phone. 
    The phone had difficulty seeing the SD card and locked up.  Removed battery, reinstalled, restarted.  Similar problem.  Tried various apps and calls - worked (poorly) on some tasks, and locked up on others.
    Formatted the card in the camera and then used the USB cable to move data from the laptop - similar symptoms.
    Formatted the card in the laptop, placed the card in the phone, and used the USB to copy data to the card - same problems.
    Stepped up the process by performing a hard reset on the phone, formatting the SD card, reinstalled apps and tested before adding personal data - still locks up.
    I reset the phone again and put my original 8GB card in; once apps were reinstalled and data was pulled onto the card, everything worked normally.
    I really want to use the larger SD card, but I haven't been able to get it functioning..
    I even saw a message somewhere that stated the Droid 2 has problems with Class 6 cards but have found nothing supporting that concept.
    Any ideas?
    Thanks, Grandpa777

    When I replaced the SD card in my D2G with a 16GB one, I researched several cards and class ratings.  Two things seemed to crop up over and over.  Class ratings as an indication of maximum speed are not the same across manufacturers.  Some Class 2 were faster than Class 4, and Class 4 faster than Class 6, etc.  Not all, but it was a crap shoot.  Second, Class 10 cards did not work well in all Droids, nor did some Class 6. So I bought the following:
    SanDisk 16 GB microSDHC Flash Memory Card SDSDQ-016G Class 2.
    It has worked flawlessly for 8 months.  And seems fast enough for my needs.
    My recommendation, for what it's worth: Buy a 16GB SanDisk SD card no higher than Class 4.

  • How to log strings stored in Station Globals and/or PreUUT values?

    Hi all,
    I have a Station Global that is persistent across all UUTs in a particular PC. I have also created a custom PreUUT dialog to obtain user input that applies to the upcoming UUT (I pass this user input to the UUT by storing it in a File Global). Both the Station Global and the File Global store a string.
    What is a good way to log these strings into the ATML report and SQL database?
    Currently, the best solution I can think of is:
    Create a LabVIEW VI that takes a string input and passes it straight through to the output
    Pass the Station Global (or the File Global) into the VI input
    Assign the VI output to Step.Result.ReportText
    This seems rather cumbersome though. Is there a simpler way to achieve this? (i.e. is there a built-in TestStand action that logs a variable directly into the report?)
    In case it's important, I'm using TestStand 2013 SP1 and I'm using the default report templates: tr5_horizontal.xsl for ATML, and C:\Program Files\National Instruments\TestStand 2013\Components\Models\TestStandModels\Database\SQL Server Create Generic Recordset Result Tables.sql for SQL.
    Thanks!
    Solved!
    Go to Solution.

    JKSH,
    You can handle this with the Additional Results functionality in TestStand, which can be configured in the settings for an existing step, or as a standalone step type. We have an example of this in the TestStand Fundamental Example series here: http://www.ni.com/product-documentation/52354/en/#toc3   (Look for section 3, "Adding Custom Data to a Report").
    I hope it helps, and let us know if we can do anything else to help!
    Daniel E.
    TestStand Product Support Engineer
    National Instruments

  • Deployed KM Scheduler Task Does Not Appear in Global Services / Scheduler Tasks

    Hi All,
    I've deployed a portal service and a KM scheduler task in the same DC but cannot see the scheduler task listed under System Admin -> System Config -> Knowledge Management -> Content Management -> Global Services -> Scheduler Tasks.
    The service works correctly and performs its periodic processing once on initialisation of the service, and I can see the resulting log statements to make sure it has worked.
    Both the service and the scheduler task are packed inside the same DC, which builds and deploys without any errors.
    There are no errors in the log regarding the deployment of the DC, or the scheduler task.
    There is a log statement about registering the classloader for the DC, which I assume is the statement in the IRFServiceWrapper init() that goes:
    CrtClassLoaderRegistry.addClassLoader(this.getKey(), this.getClass().getClassLoader());
    But I still don't see the task listed in Global Services -> Scheduler Tasks.
    Does anyone know what might be causing this or how to diagnose the problem further?
    Cheers,
    Steve

    Hi Srini,
    No it doesn't, it just has the run method, which is generated automatically.
    I have created a local Portal Application project, which is not in a DC, and this also does not have a default constructor either, but it appears in the Scheduler Tasks list as soon as it is deployed.
    I also tried to create a Portal Application DC and a Portal Application (Packaged as SDA) DC, and despite the scheduler task being the same, both DC tasks do not appear in the Scheduler Tasks.
    The only one I can get to work is the local Portal Application project. This is no use as we can't store this in DTR (unless someone can explain how to do this???).
    The Portal Application project packages the RF Framework JARs inside its PAR file, but the DC and SDA projects don't, so I tried including them via an Assembly Public Part to an External Library DC, which does include the JARs in the deployment files, but they still don't appear in the Scheduler Tasks.
    Has anyone out there got a working Scheduler Task in a Portal Application DC?
    Cheers,
    Steve

  • Use of global variables like g_cnt_transactions_transferred in the LSMW

    Hi SapAll.
    When I had a look at some of the LSMWs which use IDoc as the object for uploading data into SAP from external files, I found in the coding under the step "Maintain Field Mapping and Conversion Rules" that they use some global variables like the below:
    if p_trfcpt = yes or sy-saprl >= '46A'.
      EDI_DC40-DOCNUM = g_cnt_transactions_transferred + 1.
    endif.
    EDI_DC40-CIMTYP = g_cimtyp.
    EDI_DC40-MESTYP = g_mestyp.
    EDI_DC40-MESCOD = g_mescod.
    if p_filept = yes.
      EDI_DC40-SNDPOR = g_fileport.
    elseif p_trfcpt = yes.
      EDI_DC40-SNDPOR = g_trfcport.
    endif.
    My doubt is: where can I find these variables 'g_cnt_transactions_transferred', 'g_cimtyp', 'g_mescod', 'g_fileport', 'g_trfcport' in the LSMW, and what is the use of the variable 'g_cnt_transactions_transferred' in the LSMW?
    I have tried to find the above variables in the step "Maintain Field Mapping and Conversion Rules" under the global variables list and the other lists as well, but I couldn't find them.
    Can anyone help me with this?
    regards.
    Seetha.

    Hi Seetha,
    In the LSMW Workbench, go to the User Menu option and check the option "Display Conversion Program".
    Now when you execute with the radio button on "Display Conversion Program", you will see the code that was generated in the background while you built your LSMW.
    The global variables that you have mentioned are bound to be in this program generated in the background.
    You can put a breakpoint there and see for yourself what the values of these global variables are at runtime.
    The file port, tRFC port, number of transactions executed by one run of the LSMW IDoc program, and message type are some of the fields that you have asked about.
    Regards,
    Arun

  • What is the difference between "Partner Specific" and "Globally Available" Add-on

    I would like to know what the difference between a Partner Specific and a Globally Available solution is. Our add-on will be available in all countries, including the US. We are thinking of using the B1 licensing mechanism instead of our own. Which solution type, "Partner Specific" or "Globally Available", best suits our needs? Which license type will protect our interests and allow us to ensure that a customer cannot download more licenses than specified in the agreement?
    Thanks
    Abhishek Jain

    Without having a look at your system, it is hard to say what the reason for that could be.
    You have to know that master data changes are not reflected automatically in Inventory Controlling.
    I suggest you set up the report RMCBNE32 as a periodic (e.g. monthly) background job.
    It would also be good to compare the values in tables S032 and MBEW for those materials; this can also be a cause of a difference. The values must be the same at plant level.
