Concurrent-access logging; best approach

I have a script that runs on multiple machines at once, and I would like to provide ongoing, single-source logging of these activities. As a general rule I think the chance of two machines touching the log at once is slim, but when the script could be running on hundreds of machines at once, it's an issue I need to at least be cognizant of.
Currently I am logging to a separate .txt file for each machine using the basic Out-File cmdlet, but for a common log I am thinking an XML file would be better, as I could provide current info keyed by machine name, which could then be reviewed in Excel, for example. My question is: is this a viable option, perhaps with some code that checks whether the file is already being written to and waits? I am thinking System.IO.FileStream.Lock() may be an option.
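Something along these lines is what I have in mind (a minimal, untested sketch; the share path, retry count, and back-off are placeholders):
$logPath = '\\Px_Server\Logs\common.log'  # hypothetical UNC path to the shared log
$entry = "{0}`t{1}`t{2}" -f (Get-Date -Format o), $env:COMPUTERNAME, 'Testing'
for ($i = 0; $i -lt 10; $i++) {
    try {
        # FileShare 'None' requests an exclusive handle; a second machine
        # opening the same file at the same time should get an IOException.
        $fs = [System.IO.File]::Open($logPath, 'Append', 'Write', 'None')
        $sw = New-Object System.IO.StreamWriter($fs)
        $sw.WriteLine($entry)
        $sw.Dispose()  # also closes the underlying FileStream
        break
    } catch [System.IO.IOException] {
        # someone else has the file; back off briefly and retry
        Start-Sleep -Milliseconds (Get-Random -Minimum 100 -Maximum 500)
    }
}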
Or do I really need to look at either a database approach, or perhaps individual files for each machine, and some sort of external aggregator utility to update the XML? My test environment is not hundreds of machines, so I want to make sure I get this right
from the get go, rather than going down a road that works for me and then blows up for a larger environment.

OK, it seems I was right in my initial interpretation that Collectors must be running a Server OS, and I am certain I am not going to have access to a server for my purposes. I'll be lucky to get folks to actually use a VM running regular Windows. But I like the event log idea, not least because I recently tried testing file locks and it doesn't seem to work across a network: one machine can lock a file, but it appears unlocked to another machine.
So I started looking at remote events, and I would think that this would work:
write-eventlog -logName:PxTools -source:MachineContext -computerName:Px_Server -message:'Testing' -id:111
I am currently logged in on one machine as an admin user, and that user has rights on the remote machine. The event log already exists on the remote machine as well. But when I run the script (using Run as Administrator) I get an error that the path can't be found. However, I can ping the remote machine with no problems.
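One thing I still plan to try is a pre-flight check from the same session, since it exercises the same remote plumbing (an untested sketch; the log, source, and server names are the ones from the example above):
if ([System.Diagnostics.EventLog]::Exists('PxTools', 'Px_Server')) {
    Write-EventLog -LogName PxTools -Source MachineContext -ComputerName Px_Server -Message 'Testing' -EventId 111
} else {
    # Exists() can also fail when the Remote Registry service is stopped on
    # the target, which may be worth ruling out (an assumption on my part).
    Write-Warning 'PxTools log is not reachable on Px_Server'
}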
I looked into Jobs, but it seems you can't easily run remote jobs without enabling remoting. And I found a Scripting Guy article that talks about using Invoke-Command -AsJob, but this failed with no meaningful error information.
So, am I barking up another empty tree? Or am I missing something? I do understand that I could enable remoting on all machines, but it's that level of prep work that I can't depend on. And again, I am looking at logging updates from hundreds of machines, but the vast majority of the time there will likely be minutes between logging activity. I just want to account for the possibility of concurrency.
FWIW, the kludge I am looking at is to pass each machine the name of the log file. When ready to log, the script looks to see if there is a file there with that name. If there is, it renames the file with the machine name on the end, writes its log info, and then renames the file back. If it doesn't find the file there, but it does find a file that has been renamed, then it waits a few seconds and tries again. It seems like a kludge, but if remote events can't be made to work within the constraints I have, that seems like a viable solution, unless someone sees a problem I don't?
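Roughly, the rename dance would look like this (untested; the share path, retry count, and wait time are placeholders):
$logDir = '\\Px_Server\Logs'  # hypothetical share holding the common log
$logName = 'common.log'
$lockName = "$logName.$env:COMPUTERNAME"
$entry = "{0}`t{1}`t{2}" -f (Get-Date -Format o), $env:COMPUTERNAME, 'Testing'
for ($i = 0; $i -lt 20; $i++) {
    # Whoever wins the rename effectively owns the log until renaming it back;
    # everyone else sees the expected name missing and waits.
    $owned = Rename-Item -Path (Join-Path $logDir $logName) -NewName $lockName -PassThru -ErrorAction SilentlyContinue
    if ($owned) {
        Add-Content -Path $owned.FullName -Value $entry
        Rename-Item -Path $owned.FullName -NewName $logName
        break
    }
    Start-Sleep -Seconds 2  # someone else has it renamed; wait and retry
}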

Similar Messages

  • Access rights, best approach?

    One of our business rules restricts updates on 3 fields in a table for one type of user, and on 5 fields in that same table for another type of user. I could use 2 views to implement these restrictions, but is there another approach? I'd like to create a single Oracle Form for all users, and avoid the complexity of conditionally using one of many views during UPDATE.
    With object privileges, is there a way to restrict UPDATE/INSERT/DELETE on particular columns in a table, while not restricting other columns, without the use of views?
    -Thanks
    cf

    An extension of this question:
    We have a user who is allowed to insert a record into a table but can only supply data for 5 of the 10 available fields; we want default values inserted into the 'unavailable' fields.
    I tried to test this out by starting with the following, but I couldn't get data to load:
    SQL> create table chuck
    2 (myname varchar2(10),
    3 age number default 26);
    Table created.
    SQL> insert into chuck values ('Chuck');
    insert into chuck values ('Chuck')
    ERROR at line 1:
    ORA-00947: not enough values
    SQL> insert into facet.chuck values ('Chuck',1)
    1 row created.
    I wouldn't ask if I could figure out how to test the concept myself.
    -Chuck

  • Logging the Userid ... what is the best approach?

    Hi guys,
    I have a hard time deciding on the best approach to audit user information.
    Consider the usual information you use for auditing purposes. Normally you have the columns:
    created_on
    created_by
    updated_on
    updated_by
    In an Apex application I would record the value of :APP_USER in the columns created_by and updated_by.
    But what happens if the login name changes? Would you go on and update all relevant tables to update the now changed name?
    There could be auditing triggers involved which you would have to disable.
    On the other hand, I have checked the tables of Oracle Applications. Auditing is important for SOX compliance, I believe. They reference the USER_ID (number), not the login name.
    But in many cases you also manipulate tables from within a sqlplus session, and thus you don't have a value for APP_USER from which you could do a reverse lookup of the USER_ID. So I usually do my logging as nvl(v('APP_USER'), user).
    But then I will run into problems when the username changes.
    What is your take on this? Any suggestions?
    Regards,
    ~Dietmar.

    Hi Denes,
    technically you are right, they don't actually use referential integrity. But they store the value of FND_USER.USER_ID as created_by and last_updated_by.
    If I were to record the user_id, then I would use a foreign key to an existing local user table.
    This way I could always reference the user, since a delete would not be possible.
    And if the name changes, well then create a new account and disable the old one.
    Well, good point. But you would lose all references you might have created for this user (if applicable): user preferences, privileges, etc.
    I had an actual use case in a German bank a few years ago. They changed the naming convention for all User accounts in all systems. Thus only the login name changed but the user identity stayed the same.
    ~Dietmar.

  • What is the best approach to store "dynamic" user accessibility ?

    Hi all,
    We are implementing security in our ADF BC + Faces application. There is always a requirement to hide/disable functionality that a user is not allowed/authorized to access.
    Usually we do this at development time, based on what role the user is in. Using this approach, there is no way to change that, or to give access to a new role, at runtime (after deployment). This is what I call "static accessibility".
    In our apps, we need to give/revoke access to some functionality at runtime. This is what I call "dynamic accessibility".
    One approach that comes to my mind is:
    We define the accessibility of each function that we want to protect (hide/unhide) in database tables. Then, every time a user enters a page, we read these tables through JDBC calls and store the data in a managed bean.
    Has anybody here implemented this "dynamic accessibility"?
    Is there a better approach?
    Thank you very much,
    xtanto

    Saeed,
    SRDemo uses a managed bean that, when called, checks whether the user is in a role and returns true or false. Another approach, more elegant, is the use of a security property resolver, as available at
    http://jsf-security.sourceforge.net
    Regarding dynamic permissions, the use of JAAS seems to be a good solution. ADF Security uses JAAS permissions to assign component access to users.
    E.g., if the user role "manager" has access to edit the salary column, then the security constraint added to the update button could be
    #{!bindings.<attribute binding>.updateable}
    Note that ADF Security sets the updateable flag on an attribute.
    Or you use
    #{bindings.<iterator binding>.permissionInfo.create}
    #{bindings.<attribute binding>.permissionInfo.update}
    #{bindings.permissionInfo['pageDefName'].view}
    etc., to determine what a user can or can't do.
    Note that I haven't tested whether the permissions are cached for a specific application or whether they are checked each time. If they are checked each time, then this would be a performance penalty, but it allows you to dynamically set permissions for user groups, as obviously needed in your application.
    No, we don't have a tutorial for this. But an Oracle By Example for end-to-end security implementation is on my collateral plan for JDeveloper 11 (just need to write a doc writer ;-) )
    Frank

  • Best approach - using materialized views

    Hi
    We are using materialized views for structuring complex business data.
    These views get refreshed every night. However, the refresh job runs slowly in some cases. In such scenarios, the job would still be running while client applications (JDBC) try to access data from the views. This results in client calls waiting for a long time or timing out. I would like to know the best approach in such scenarios to ensure data availability and performance.
    Thanks
    RC

    See
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14226/repmview.htm#i31171
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14226/repmview.htm#sthref491
    and
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/basicmv.htm#sthref521
    (which also lists the Restrictions)
    You can use a Fast Refresh if it meets the restrictions. You need to have a MATERIALIZED VIEW LOG created on the source table(s).
    Hemant K Chitale

  • Concurrent Access for Crystal Reports for eclipse

    Hi,
    May I know whether there is any concurrent-access limitation when viewing reports on the web using the JRC?

    The performance does get better after the initialization of the JRC application. Because it is a pure Java application without any reliance on servers for processing power, it will rely on the speed of the application server. During initialization, all of the libraries get loaded into the classpath, so this does take some time. Generally speaking, the performance will get better after this because everything has been loaded into memory; that is, until you restart the application server.
    The JRC will be a bit slower when rendering a large report.  Depending on the size of that report, you may be looking at between a few seconds and several minutes in processing time.
    Whether or not you use the JRC will depend on the number of users you anticipate having at any given time for your application as well as the general size of your reports.
    Crystal Reports Server comes with a set number of licenses. Initially it comes with 5, and you can purchase up to 20 or 25. This means you could potentially have about the same number of users as you would with a JRC application, but if you have large reports then you could take advantage of being able to schedule those reports (set them to run during an off time so your users can view the instances quickly when they need to). You do have to be more mindful of how you use licenses with this product, since for each user logged on to the system there will be a license used. There are many additional benefits, including performance gains, that can be had with CR Server. One key difference would be in the cost of the product: the JRC is essentially free, whereas CR Server is not.
    I would suggest reading our product documentation and applying it to your situation to determine what implementation would work best for you.

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we still have to maintain all the data in another schema. That means we have to update two schemas in a given session: one schema for the user and another schema for all the data, so there may be update problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best approach?
    Please give your valuable ideas.

    It is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • File name format of rotated access log

    Hi,
    I'm using wls5.1sp9 on Solaris.
    Is it possible to specify a different file name format for the rotated HTTP access logs?
    Instead of getting files named access.log0001 and so on, I would prefer access.log.<date> or some other custom format.
    Best regards,
    Torleif Galteland

    Hi,
    Thanks for your response. But I have another query. The name of the file, as per your reply, is like UsageReport.xls, correct? Now my query is: should it contain some date, account id/org id, etc. associated with that usage? The reason for my query is that if the MSP downloads the usage for different orgs having the same bill date, then it would conflict with the other usage files.
    Thanks And Regards,
    Sumanta Saha

  • Best approach for IDOC - JDBC scenario

    Hi,
    In my scenario I am creating a sales order (ORDERS04) in the R/3 system, which needs to be replicated to a SQL Server system. I am sending the order to XI as an IDoc and want to use JDBC for sending the data to SQL Server. I need to insert data into two tables (header & details). Is it possible without BPM? Or what is the best approach for this?
    Thanks,
    Sri.

    Yes, this is possible without BPM.
    Just create the corresponding data type for the insertion.
    If the records to be inserted are different, then there will be 2 different data types (one for the header and one for the detail).
    Do a multi-mapping, where your source is mapped into the header and details data types, and then send using the JDBC receiver adapter.
    For the structure of your data type for the insertion, just check this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm
    To access any database from XI, you will have to install the corresponding driver on your XI server.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3867a582-0401-0010-6cbf-9644e49f1a10
    Regards,
    Bhavesh

  • What's the best approach to resetting Calendar data on Server?

    I have a database format error in a calendar that I only noticed after the migration to Server on Yosemite. I'll paste in a snippet from the Error Log at the bottom that shows the error (I've highlighted the description of the problem in red).
    I found a pretty cool writeup from Linc in a different thread, but it's aimed at fixing a similar problem for a local user on their own machine rather than an iCal server like what we're running. Here's the link to that thread: Re: Calendar crashes on open. For example, does something like Calendar Cleaner work on our server database as well?
    In my case I think I'd basically like to gracefully remove all the Calendar databases from Server and start fresh (all the users' calendars are backed up on their local machines, so they can just import them into fresh/empty calendars once I've cleaned out the old stuff). Any thoughts on the "best approach" would be much appreciated.
    Here's the error log...
    File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twi sted/internet/defer.py", line 1099, in _inlineCallbacks
    2015-01-31 07:14:41-0600 [-] [caldav-0]         result = g.send(result)
    2015-01-31 07:14:41-0600 [-] [caldav-0]       File "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/caldav/datastore/sql.py", line 3635, in component
    2015-01-31 07:14:41-0600 [-] [caldav-0]         e, self._resourceID
    2015-01-31 07:14:41-0600 [-] [caldav-0]     txdav.common.icommondatastore.InternalDataStoreError: Data corruption detected (Invalid property: GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     VERSION:2.0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CALSCALE:GREGORIAN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     PRODID:-//Apple Inc.//Mac OS X 10.8.2//EN
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTART:20121114T215900Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTEND:20121114T232700Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CLASS:PUBLIC
    2015-01-31 07:14:41-0600 [-] [caldav-0]     CREATED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DESCRIPTION:Flight leg 2 of 2 for trip from MSP to LAX\\nhttp://www.google.
    2015-01-31 07:14:41-0600 [-] [caldav-0]      com/search?q=US+29+flight+status\\nBooked on November 8\\, 2012\\n
    2015-01-31 07:14:41-0600 [-] [caldav-0]     DTSTAMP:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     GEO:33.4341666667\\;-112.008055556
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LAST-MODIFIED:20121108T123850Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     LOCATION:Sky Harbor International Airport\\, Phoenix\\, AZ
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SEQUENCE:0
    2015-01-31 07:14:41-0600 [-] [caldav-0]     STATUS:CONFIRMED
    2015-01-31 07:14:41-0600 [-] [caldav-0]     SUMMARY:US 29 from PHX to LAX
    2015-01-31 07:14:41-0600 [-] [caldav-0]     URL:http://www.hipmunk.com/flights/MSP-to-LAX#!dates=Nov14,Nov17&group=1&s
    2015-01-31 07:14:41-0600 [-] [caldav-0]      elected_flights=96f6fbfd91,be8b5c748d;kind=flight&locations=MSP,LAX&dates=
    2015-01-31 07:14:41-0600 [-] [caldav-0]      Nov14,Nov16&group=1&selected_flights=96f6fbfd91,
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VEVENT
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:[email protected]
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-CALENDARSERVER-PERUSER-UID:D0737009-CBEE-4251-A288-E6FCE5E00752
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRANSP:OPAQUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     BEGIN:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACKNOWLEDGED:20121114T210756Z
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ACTION:AUDIO
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ATTACH:Basso
    2015-01-31 07:14:41-0600 [-] [caldav-0]     TRIGGER:-PT2H
    2015-01-31 07:14:41-0600 [-] [caldav-0]     UID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-APPLE-DEFAULT-ALARM:TRUE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     X-WR-ALARMUID:040C4AB7-EF30-4F0C-9D46-6A85C7250444
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VALARM
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERINSTANCE
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:X-CALENDARSERVER-PERUSER
    2015-01-31 07:14:41-0600 [-] [caldav-0]     END:VCALENDAR
    2015-01-31 07:14:41-0600 [-] [caldav-0]     ) in id: 3405
    2015-01-31 07:14:41-0600 [-] [caldav-0]    
    2015-01-31 07:16:39-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None
    2015-01-31 08:08:40-0600 [-] [caldav-1]  [AMP,client] [calendarserver.tools.purge#warn] Cleaning up future events for principal A95C9DB2-9757-46B2-ADF6-4DECE2728820 since they are no longer in directory
    2015-01-31 08:09:10-0600 [-] [caldav-1]  [-] [twext.enterprise.jobqueue#error] JobItem: 39, WorkItem: 762001 failed: ERROR:  canceling statement due to statement timeout
    2015-01-31 08:09:10-0600 [-] [caldav-1]    
    2015-01-31 08:13:40-0600 [-] [caldav-1]  [-] [txdav.common.datastore.sql#error] Transaction abort too long: PG-TXN</Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/calendarserver/tools/purge.py#1032$_cancelEvents>, Statements: 5, IUDs: 0, Statement: None

    <facepalm>  Well, there you go.  It turns out I was over-thinking this.  The Calendar app on a Mac can manage this database just fine.  Sorry about that.  There may be an easier way to do this, but here's how I did it.
    Use the Calendar.app on a local computer to:
    - Export the corrupted calendar to an ICS file on the local computer (Calendar -> File -> Export -> Export)
    - Create a new local calendar (Calendar -> File -> New Calendar -> On My Mac)
    - Import the corrupted calendar into the new/empty local calendar (Calendar -> File -> Import...)
    - Delete years and years of old events, including the one that was triggering that error message
    - Export the (now much smaller) local calendar to another ICS file on my computer (Calendar -> File -> Export -> Export)
    - Create a new calendar on the server (Calendar -> File -> New Calendar -> in the offending server-based iCal account)
    - Import the edited/fixed/smaller/no-longer-corrupted calendar into the new/empty server calendar (Calendar -> File -> Import...)
    - Make the newly-created iCal calendar the primary calendar (drag it to the top of the list of calendars on the server)
    - Delete the old/corrupted calendar (right-clicking on the bad calendar in the calendar list - you can only delete it once it's NOT the primary calendar any more)

  • Real time logging: best practices and questions ?

    I've 4 couples of DS 5.2p6 in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then later centrally archived thanks to a log sender daemon.
    I've now a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory
    server access logs in real time.
    At first glance, each directory generates about 1.1 MB of access log per second.
    1)
    I'd like to know if there are known best practices/experiences in such a case.
    2)
    Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk subsystem (SAN, NAS, ...)?
    3)
    In DS 5.2, what's the default access log buffering policy: is there a maximum buffer size and/or time limit before flushing to disk? Is it configurable?

    Usually log buffering should be enabled. I don't know of any customers who turn it off; even if you do, it should be after careful evaluation in your environment. AFAIK, there is no configurable limit for buffer size or time limit before it is committed to disk.
    Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of disk. Let's say the ramdisk is 2 GB max in size and you receive about 1 MB/sec in writes. Say max-log-size is 30 MB. You can schedule a job to run every minute that copies the newly rotated file(s) from the ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try but may not be practical.
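    On Windows, the per-minute copy job could be as simple as this (an untested sketch; the ramdisk drive, archive share, and file-name pattern are placeholders):
    $ramDisk = 'R:\ds-logs'            # hypothetical ramdisk holding the live logs
    $archive = '\\logs-hq\ds-archive'  # hypothetical central archive share
    # Rotated access logs get a timestamp suffix, while the live file is just
    # 'access', so move only the finished rotations.
    Get-ChildItem -Path $ramDisk -Filter 'access.*' |
        Where-Object { $_.Name -ne 'access' } |
        Move-Item -Destination $archive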
    Ramdisk on windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
    I should ask: how real-time does this log correlation need to be?
    Edited by: etst123 on Jul 23, 2009 1:04 PM

  • Best approach to create a security environment in Java

    I need to create a desktop application that will run third-party code, and I need to prevent the third-party code from exporting information from the application by any means (web, clipboard, file I/O).
    Something like:
    public class MyClass {
        private String protectedData;
        public void doThirdPartyTask() {
            String unprotectedData = unprotect(protectedData);
            ThirdPartyClass.doTask(unprotectedData);
        }
        private String unprotect(String data) { /* ... */ }
    }
    class ThirdPartyClass {
        public static void doTask(String unprotectedData) {
            // Do task using unprotected data.
            // Malicious code may try to externalize the data.
        }
    }
    I'm reading about SecurityManager and AccessController, but I'm still not sure what's the best approach to handle this.
    What should I read about to do this implementation?

    Whilst code without any permissions (as supplied through the ProtectionDomain by the class's ClassLoader) cannot access the network, files, or the system clipboard, this does not mean it is entirely isolated.
    Even modern cryptographic systems are surprisingly vulnerable to side-channel attacks.
    Where an untrusted agent has access to sensitive data, it isn't very feasible to stop any escape of that data. Sure, you can block off overt posting of the data, but you cannot reasonably block off all covert channels.
    Steganographic techniques are a particularly obvious way to covertly send sensitive data out amongst intended publications.

  • Best Approach to create Security / Authorization Schema for an APEX Apps

    Hi,
    I am planning to create a Security / Authorization Schema for an APEX Application.
    Just want to know the best approach to creating the security feature in APEX, so that it can be reused in other APEX applications too.
    I am looking for following features...
    1. Users log in, and then the user's name is stored in APEX_USER...
    2. Based on the user, I want to restrict the application at the following levels:
    - TABS
    - TABS - Page1 (Report)
    - Page2 (Form)
    - Page2 (Region1)
    - Page2 (Region1, Button1)
    - Page2 (Region1, Items, ...)
    And so on... basically, depending on the user, he will have access to certain TABS, Pages, Regions, Buttons, and Items.
    I know we have to create the Authorization Schemes for this and then attach these Authorization Schemes at the different levels we want.
    My question is: what should the TABLE structure be to capture this info for each user, where we will say this USER has the following access, and from which we then create the Authorization Schemes?
    Also, what should the FRONT end be through which we enter these details?
    So, I'm wondering: a lot of people may already have implemented this feature, so if you guys can provide the BEST approach (re-usable for other APEX applications), that would be really nice.
    Thanks,
    Deepak

    Hi Raghu,
    thanks for the detailed info.
    So that means I should have 2 tables...
    master table (2 columns - username, password)
            username    password
            user1       xxxx
            user2       xxxx
    2nd table (2 columns - username, chq_disp_option)
    - In this table, we don't have the Y/N flag you mentioned.
    - Do we have to enter all the regions/tabs/pages in the application here, or just those regions/tabs/pages which are conditionally displayed?
    - So that means for all the pages/regions/tabs/items in the entire application, we have to call the conditional display.
    - Suppose we have 3 tabs, 5 pages, 6 regions, and 15 items; that means in this table we have to enter (3+5+6+15) = 29 records for each individual user.
            username    chq_disp_option
            user1       re_region1
            user1       re_region2
            user1       tb_main
            user1       Page1
            user1       Page5
            ----        ----
    - How are you defining a unique name for regions? I mean, in the static ID or the title?
    - Is the unique name for a tab & item the same as the tab name (T_HOME) & item name (P1_ITEM1), or are you defining it somewhere else?
    Thanks,
    Deepak

  • Best approach to archival of large databases?

    I have a large database (~300 GB) and have a data/document retention requirement that requires me to take a backup of the database once every six months, to be retained for 5 years. Other backups only have to be retained as long as operationally necessary, but twice a year I need these "reference" backups to be available, should we need to restore the data for some reason; usually historical research for data that extends beyond what's currently in the database.
    What is the best approach for making these backups? My initial response would be to do a full export of the database, as this frees me from any dependencies on software versions, etc. However, an export takes a VERY long time. I can manage it by doing multiple concurrent exports by tablespace; this can be completed in < 1 day. Or I can back up the software directory plus the database files in a cold backup.
    Or is RMAN well-suited for this? So far, I've only used RMAN for my operational-type backups - for short-term data recovery needs.
    What are other people doing?

    Thanks for your input. How would I do this? My largest table is in monthly partitions, each in its own tablespace. Would the process have to be something like: alter table exchange partition-to-be-rolled-off with non-partitioned-table, then export that tablespace?

  • Regd. use of enterprise services : Best approach

    Hi Experts,
    I have configured a scenario using standard enterprise services, used soamanager in ABAP, and tested the services; it works perfectly.
    But the issue came in terms of security: it seems we cannot expose our SAP system as a URL for the services.
    We need to use PI to connect to the third party. I need suggestions on the following:
    1. Can we connect the ABAP stack of PI to the ABAP stack of SAP with some configuration? I don't want to create any additional structures in PI apart from the one which I imported.
    2. If I set up a SOAP-to-proxy (ABAP) scenario, then I need to duplicate the structure for the source and use the standard ES at the proxy receiver, but I want to avoid duplication, as it is cumbersome to do for all services.
    3. If I create a communication/service user, I need to give access to some SAP tables to support the service functionality.
    What is the best approach for exposing services from PI using standard enterprise services?
    Any pointer will be appreciated.
    Regards,
    Srinivas
    Edited by: Srinivas on Jul 7, 2010 7:29 PM

    Hi
    Here are the answers to your questions:
    >(1) How can I search for relevant Enterprise Services in PI for the SAP R/3 BAPI?
    There is no way to look up an enterprise service for a BAPI. ES are harmonized services based on GDTs, whereas a BAPI is more SAP-oriented in its data type definitions; the only way to find out is to look into the code, as many ES call BAPIs internally. The best way to identify the correct ES is by business use (like Purchase Order creation, etc.).
    >(2) If a relevant ES is available, then what are the steps to be performed?
    It depends how you want to use the ES. You can call the ES from the outside world (read: third-party tools and applications), and these services can be used as ready-to-use building blocks for new applications. You must know the URL of the WSDL and the security settings (user/password) to use it with any application. You can call it from ABAP, .Net, or Java applications. You can test an ES with any SOAP testing tool, like WSNavigator or SOAP UI.
    >(3) If a relevant ES is not available, then what are the steps to be performed?
    You have a few options if an ES is not available: design your own by following a proper governance model (i.e. the PIC process), or live with an existing BAPI or RFC and convert it into a web service using the web service wizard available in SE80 and SE37.
    Regards,
    Gourav
