Best Practice for Distributed TREX NFS vs cluster file systems

Hi,
We are planning to implement a distributed TREX on Red Hat on x64, but we are wondering what the best practice or approach is for configuring the "file server" used in a TREX distributed environment. The guides mention a file server, which seems to be another server connected to a SAN that exports or shares the file systems required to be mounted on all the TREX hosts (master, backup and slaves). However, we know that the BI Accelerator uses OCFS2 (a cluster file system) to access the storage; in the case of Red Hat we have GFS or even OCFS.
Basically, we would like to know what the best practice is and how other companies are doing this for a TREX distributed environment, using either network file systems or cluster file systems.
Thanks in advance,
Zareh

I would like to add one more thing: in my previous comment I assumed that it is possible to use a cluster file system with TREX because the BI Accelerator does, but maybe that is not supported; it does not seem to be clear in the TREX guides.
So the initial question should be:
Are cluster file system solutions supported on a plain TREX implementation?
Thanks again,
Zareh
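
For reference, a minimal sketch of the plain-NFS layout the installation guides seem to describe (the host names and export path below are made up; an OCFS2/GFS setup would instead put the same directory on a shared cluster volume mounted by every node):

# on the file server (here called trexfs), export the shared TREX directory to all TREX hosts
echo "/export/sapmnt/TRX  trexmaster(rw,no_root_squash,sync) trexbackup(rw,no_root_squash,sync) trexslave1(rw,no_root_squash,sync)" >> /etc/exports
exportfs -ra

# on every TREX host (master, backup, slaves), mount the same share at the same path
mkdir -p /sapmnt/TRX
echo "trexfs:/export/sapmnt/TRX  /sapmnt/TRX  nfs  rw,hard,intr  0 0" >> /etc/fstab
mount /sapmnt/TRX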

Similar Messages

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most App. Servers can generate the required classes while deploying the EJBs in the
    application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are
    two that we are aware of) allow these classes to be generated before installation
    time, in which case the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways
    in which the classes can be generated. The first is using 'ejbc' ( assume we are
    using BEA Weblogic ), which generates the stub, skeleton and additional classes for
    the application and returns the file, say, "Deployable_myapp.ear" containing all
    the necessary classes and files. This file is the one that is then installed. The
    other option is to install the file "myapp.ear" and let the Weblogic App. server
    itself, generate the required classes at the installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to
    separately generate the stubs for each version of the App. Server that we support?
    I.e., if we generate a deployable file having the required classes using the 'ejbc'
    of Weblogic Ver5.1, can the same file be installed on Weblogic Ver6.1, or do we
    have to generate a separate file?
    If the second method, of 'install-time-generation' of stubs, is used, what is the
    nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif
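
    For illustration only, a rough sketch of the 'pre-generation' route with WebLogic's ejbc compiler (the jar names and classpath are made up, and the exact invocation should be checked against your WebLogic version, since the generated stubs are generally tied to the server version, which is exactly the portability concern raised above):

    # hypothetical sketch: generate stubs/skeletons before deployment with WebLogic's ejbc
    java -classpath "$WL_HOME/lib/weblogic.jar" weblogic.ejbc mybeans.jar Deployable_mybeans.jar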

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised to not find a best practice document for how to distribute Microsoft SQL Databases. With other database formats, it's common to distribute them as scripts. It seems that feature is rather limited with the built-in
    tools Microsoft provides. There appear to be limits to the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages by limiting which versions
    of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is less of a problem if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This causes downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema, and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine, and while it is not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allow them to align to different business requirements.
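
    As a concrete illustration of the backup/restore route described above, a minimal sketch using sqlcmd (the server names, paths, database name and logical file names are made up; adjust the MOVE targets to the destination instance):

    # back up on the source instance
    sqlcmd -S SourceServer -Q "BACKUP DATABASE [SalesDb] TO DISK = N'C:\Backups\SalesDb.bak' WITH INIT, COMPRESSION"
    # copy the .bak file to the destination server (or use a UNC path), then restore there
    # (the destination must be the same or a newer SQL Server version)
    sqlcmd -S TargetServer -Q "RESTORE DATABASE [SalesDb] FROM DISK = N'C:\Backups\SalesDb.bak' WITH MOVE 'SalesDb' TO N'D:\Data\SalesDb.mdf', MOVE 'SalesDb_log' TO N'D:\Data\SalesDb_log.ldf', REPLACE"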

  • Best practices for deploying forms in a 'cluster'?

    Anyone know of any public docs that discuss typical best practices for
    - forms deployment;
    - forms apps management and version control; and/or
    - deploying (and keeping) the .frm/frx in sync when using multiple forms servers in an HA or load balancing environment?

    Hi adil,                      
    Based on your description, you want to know the best practices for search service in a SharePoint farm.
    Different farms have different search topologies; for the best search performance, I recommend that you follow the guidance for small, medium, and large farms.
    The article is about the guidance for different farms. 
    The search service can run alongside other services on the same server; if conditions permit and you want better performance for the search service and other services (including BI), you can deploy the search service on a dedicated server.
    If conditions permit, I recommend combining a query component with a front-end web server to avoid putting crawl components and query components on the same server.
    In your SharePoint farm, you can deploy the query components on a WFE server and the crawl components on an application server.
    The articles below describe the best practices for enterprise search.
    https://technet.microsoft.com/en-us/library/cc850696(v=office.14).aspx
    https://technet.microsoft.com/en-us/library/cc560988(v=office.14).aspx
    Best regards      
    Sara Fan
    TechNet Community Support

  • Best practice for importing non-"Premiere-ready" video files

    Hello!
    I work with internal clients that provide me with a variety of different video types (could be almost ANYTHING: WMV, MP4, FLV).  I of course ask for AVIs when possible, but unfortunately, I have no control over the type of file I'm given.
    And, naturally, Premiere (just upgraded to CS5) has a hard time dealing with these files.  Unpredictable, ranging from working fine to not working at all, and everything in between.  Naturally, it's become a huge issue for turnaround time.
    Is there a best practice for preparing files for editing in Premiere?
    I've tried almost everything I can think of: converting the file(s) to .AVIs using a variety of programs/methods.  Most recently, I tried creating a Watch Folder in Adobe Media Encoder and setting it for AVI with the proper aspect ratio.  It makes sense to me that that should work: using an Adobe product to render the file into something Premiere can work with.
    However, when I imported the resulting AVI into Premiere, it gave me the Red Line of Un-renderness (that is the technical term, right?), and had the same sync issue I experienced when I brought it in as a WMV.
    Given our environment, I'm completely fine with adding render time to the front-end of projects, but it has to work.  I want files that Premiere likes.
    THANK YOU in advance for any advice you can give!
    -- Dave

    I use an older conversion program (my PrPro has a much older internal AME, unlike yours), DigitalMedia Converter 2.7. It is shareware, and has been replaced by Deskshare with newer versions, but my old one works fine. I have not tried the newer versions yet. One thing that I like about this converter is that it ONLY uses System CODEC's, and does not install its own, like a few others. This DOES mean that if I get footage with an oddball CODEC, I need to go get it, and install it on the System.
    I can batch process AV files of most types/CODEC's, and convert to DV-AVI Type II w/ 48KHz 16-bit PCM/WAV Audio and at 29.97 FPS (I am in NTSC land). So far, 99% of the resultant converted files have been perfect, whether from DivX, WMV, MPEG-2, or almost any other format/CODEC. If there is any OOS, my experience has been that it will be static, so I just have to adjust the sync offset by a few frames, and that takes care of things.
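
    For readers without that converter, a roughly equivalent conversion to NTSC DV-AVI with 48 kHz 16-bit PCM audio could be scripted with ffmpeg (not mentioned in the thread; the file names are made up and the flags should be verified against your ffmpeg build):

    # hypothetical sketch: convert a WMV to NTSC DV-AVI (720x480, 29.97 fps, 48 kHz 16-bit PCM stereo)
    ffmpeg -i input.wmv -c:v dvvideo -pix_fmt yuv411p -s 720x480 -r 30000/1001 -aspect 4:3 -c:a pcm_s16le -ar 48000 -ac 2 output.avi
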
    In a few instances, the PAR flag has been missed (Standard 4:3 vs Widescreen 16:9), but Interpret Footage has solved those few issues.
    Only oddity that I have observed (mostly with DivX, or WMV's) is that occasionally, PrPro cannot get the file's Duration correct. I found that if I Import those problem files into PrElements, and then just do an Export, to the same exact specs., that resulting file (seems to be 100% identical, but something has to be different - maybe in the header info?) Imports perfectly into PrPro. This happens rarely, and I have the workaround, though it is one more step for those. I have yet to figure out why one very similar file will convert with the Duration info perfect, and then a companion file will not. Nor have I figured out exactly what is different, after running through PrE. Every theory that I have developed has been shot down by my experiences. A mystery still.
    AME works well for most, as a converter, though there are just CODEC's, that Adobe programs do not like, such as DivX and Xvid. I doubt that any Adobe program will handle those suckers easily, if at all.
    Good luck,
    Hunt

  • BEST PRACTICE FOR THE REPLACEMENT OF REPORTS CLUSTER

    Hi,
    I've read the note reports_guide_to_changed_functionality on OTN.
    On page 5 it is stated that the Reports cluster is deprecated.
    Snippet:
    Oracle Application Server High Availability provides the industry’s most
    reliable, resilient, and fault-tolerant application server platform. Oracle
    Reports’ integration with OracleAS High Availability makes sure that your
    enterprise-reporting environment is extremely reliable and fault-tolerant.
    Since using OracleAS High Availability provides a centralized clustering
    mechanism and several cutting-edge features, Oracle Reports clustering is now
    deprecated.
    Please, can anyone tell me what the best practice is to replace the Reports cluster?
    It's really annoying that the clustering technology is changing in every version of reports!!!
    martin

    hello,
    in reality, reports server "clusters" was more a load-balancing solution than a clustering one (no shared queue or cache). since it is desirable to have one load-balancing/HA approach for the application server, reports server clustering is deprecated in 10gR2.
    we understand that this frequent change can cause some level of frustration, but it is our strong belief that unifying the HA "attack plan" for all of the app server components will ultimately benefit customers in simplifying their topologies.
    the current best practice is to deploy LBRs (load-balancing routers) with sticky-routing capabilities to distribute requests across middle-tier nodes in an app-server cluster.
    several customers in high-end environments have already used this kind of configuration to ensure optimal HA for their systems.
    thanks,
    philipp

  • Symantec Antivirus Best Practice for Hyper-v 2012 R2 Cluster

    Hi Team ,
    I am working with a Hyper-V 2012 R2 cluster with 5 nodes. All are working fine.
    Now I am planning to install Symantec antivirus on each node. Please let me know if there is a best practice guide for installing Symantec Antivirus on Hyper-V 2012 R2 cluster nodes.
    I am using the full version of Windows 2012 R2 with Hyper-V.
    Thanks
    Ravi

    I would also strongly consider running no antivirus at all, but if you do, here are the recommended exclusions and some possible antivirus issues to look out for.
    Look for the Hyper-V section:
    http://social.technet.microsoft.com/wiki/contents/articles/953.microsoft-anti-virus-exclusion-list.aspx
    Big things to stay away from from a VM level too:
    1. Do not have a set time when all VMs will kick off a scheduled full disk scan.  This can create an I/O storm on your hosts and saturate the CPUs as well.  Look for products or settings that allow randomization of full disk scans on
    the VMs, or do not do full disk scans at all and only keep the real-time scanners active on the VMs for incoming writes.
    2.  Watch for Antivirus products that update all the VMs at the same time.  Again, sometimes you can randomize or exclude a scheduled full disk scan, but sometimes an automated update that kicks off say at 12am can then automatically kick off a
    mini scan.   This can also create disk I/O and CPU storms.
    The problem that still exists with many Antivirus products today is that they try to scan as fast as they can and then get out of the way.  This works ok for endpoint desktops or laptops, but when you have 50 or more VMs on a host all ramping up trying
    to get done as quick as they can, then this can cause some issues. 
    Rob McShinsky (www.VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are close to 160-180 entities in the database, and we want to know the best approach/practice for deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impact on performance. Also, I would appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a Non-RAC environment. Code used is given below:
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));
    -- sequence-based transaction IDs
    begin
      for i in 1..1000000 loop
        insert into test1 values (testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /
    -- timestamp-based transaction IDs
    begin
      for i in 1..1000000 loop
        insert into test2 values (to_number(to_char(systimestamp, 'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than Sequence... Any thoughts are highly appreciated...
    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism, which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
    Regards
    Jonathan Lewis
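
    For what it's worth, a minimal sketch of the two indexing options Jonathan mentions (the index names and login are made up; test1/txnid are from the PoC above, and the hash-partitioned variant requires the Partitioning option license):

    # hypothetical sketch, run as the schema owner
    # reverse-key index: spreads consecutive key values across leaf blocks to reduce hot-block contention
    echo "create index test1_txnid_rev on test1 (txnid) reverse;" | sqlplus -s scott/tiger
    # alternative: hash-partitioned index on the same column
    echo "create index test1_txnid_hp on test1 (txnid) global partition by hash (txnid) partitions 16;" | sqlplus -s scott/tiger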

  • Best practice for reducing the number of XML files in an IDML file for translation?

    Our engineering team is looking for ways to reduce the number of XML files produced when a (relatively) simple 2-page INDD file is saved out as IDML for translation.

    IDML contains quite a few XML files, but I suspect you're only interested in the Stories folder if you're working on a translation. The way to do that is... to reduce the number of stories. If it's a two-pager, chances are that you have a whole bunch of unthreaded text frames. Thread them in logical reading order. This will help the translator(s) as well - by threading frames in logical reading order, they don't have to work to read the document in the same order as the target audience.
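
    Since an IDML file is just a ZIP package, you can check how many story XML files a given document would hand to translation before sending it off; a small sketch (the file name is made up):

    # count the Story_*.xml files inside an IDML package; threading frames reduces this number
    unzip -l mydoc.idml | grep -c "Stories/Story_"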

  • TestStand best practices for distribution

    We're looking for best practices for distributing TestStand
    systems.  I've found the TestStand
    Style Guide but it's a little sparse on how to set up the
    distributed systems.  We're looking for guidelines on where to put
    configuration data, where to put sequence files, how to manage users,
    and similar. 
    We'll be distributing systems to various contract manufacturers in
    China as well as using the systems in multiple locations in-house.
    What have you done with distributions and what problems have you seen?
    Right now we're planning to separate the Deployment Engine from our
    sequence files and to put all our configuration into our distribution
    kit for the Deployment Engine.

    The TestStand Reference manual provides good information on system
    distribution.  Chapter 14 : Deploying TestStand Systems covers the
    necessary information for distributing your TestStand
    application.  I do have a few suggestions and caveats:
    (1)  Make sure you use a Work Space when distributing your
    files.  A Work Space makes it easy to package all of your files
    and dependencies.  Moreover, the distribution wizard provides a
    feature that displays all the files that will be included in the
    installation package into an easy to use Tree View.
    (2)  TestStand currently does not look for embedded DLL
    dependencies.  So, if your code is calling a DLL module that calls
    a DLL module, be sure to include the embedded dependency DLL in your
    WorkSpace.
    (3)  StationGlobals and Custom Data Types are usually missed in
    installation.  Be sure that you are including your
    StationGlobals.ini file and MyTypes.ini file from the Cfg directory
    into your installation workspace.
    If you have more specific questions, please feel free to post them here!
    Good Luck!
    Tyler Tigue
    Applications Engineer
    National Instruments

  • Best practice for exporting a dps folio so a third party can work on it?

    Hi All,
    Keen for some thoughts on the best practice for exporting a DPS folio, INDD files, links and all, so a third party can carry on the work. Is there a better alternative to packaging each individual INDD file and sharing the folio through Adobe?
    I know there have been similar questions here in the past, but the last (that I've found) was updated mid-2011, and I was wondering if there have been any improvements to this seemingly backwards workflow since then.
    Thanks in advance!

    Nothing better than packaging them and using Dropbox to share. I caution you
    on one thing as far as packaging:
    http://indesignsecrets.com/file-packaging-feature-can-cause-problems-in-dps-workflows.php

  • IS RAW DEVICES SUPPORTED OVER A CLUSTER FILE SYSTEM

    Can raw partitions be defined for datafiles after having chosen Cluster File System as the storage option for the database while creating a fresh database using
    DBCA?

    > Do update on how the partitions have to be defined in either cases?
    For both ASM and OCFS, a partition must exist on the disk - it can be of any partition type. Does not matter. Simply that the s/w references a partition and not an entire disk.
    So for example, /dev/sdaf and /dev/sdag are two shared devices on the cluster (LUNs on the SAN or whatever).
    You create a partition on each. E.g.:
    # fdisk -l /dev/sdaf
    Disk /dev/sdaf: 36.5 GB, 36573020160 bytes
    255 heads, 63 sectors/track, 4446 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdaf1 1 4446 35712463+ 83 Linux
    To use the first device as an OCFS device, you need to build an OCFS file system on it using mkfs.
    It can then be mounted as a "normal" cooked file system. Remember that /etc/fstab needs to be updated so it is mounted on startup.
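
    For example, assuming OCFS2 (the label and mount point are made up, and the cluster configuration in /etc/ocfs2/cluster.conf must already be in place on every node):

    # build an OCFS2 file system on the first shared partition and mount it
    mkfs.ocfs2 -L u02vol /dev/sdaf1
    mkdir -p /u02
    mount -t ocfs2 /dev/sdaf1 /u02
    echo "/dev/sdaf1  /u02  ocfs2  _netdev,datavolume  0 0" >> /etc/fstab
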
    To use the second device for ASM, you have two choices. If you have the ASMlib kernel module installed, you can use that to configure a volume label and assign it for use by ASM.
    Alternatively, you simply map the device (partition) to a raw device for detection by ASM. E.g.
    # raw /dev/raw/raw1 /dev/sdag1
    Of course, you also need to make this permanent by updating the raw device list config file so that this mapping is performed on reboot. On Linux, this is the /etc/sysconfig/rawdevices file. Also remember that the user and group ownership of the logical raw device created must allow ASM full access to it (e.g. use chown oracle:dba /dev/raw/raw1).
    In a nutshell, this is how raw devices are used as OCFS and ASM volumes (on RHEL specifically, but I expect no major differences in this approach on other OSs).

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium to large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes, maybe more)?
    I will be glad to hear any suggestion or to learn from others experience.
    Shimi

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?
    %

  • Microsoft best practices for patching a Cluster server

    Good morning! I was wondering if you had any web resources (webcasts) or whitepapers on Microsoft best practices for patching a cluster server. I will list what I've seen online; the third one was very good:
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    Failover Clusters in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/ff182338(v=ws.10)
    Patching Windows Server Failover Clusters
    http://support.microsoft.com/kb/174799/i

    Hi Vincent!
    I assume this step-by-step guide can also be used if you have more than 2 nodes, as long as you make sure a majority of nodes are up (and the quorum disk is available).
    I just had a strange experience during maintenance of 2 nodes (node nr 7 and nr 8) in an 8-node Hyper-V cluster, R2 SP1 with CSV. I used SCVMM 2012 to put the nodes in maintenance mode (live migrating all resources to other nodes). I then looked in "Failover Cluster
    Manager" to check that the nodes had been "Paused", and yes, everything was just fine. I then did Windows Update and restarted, no problem. But after the restart I wanted to run PSP (HP's update utility) to update some more drivers, software etc. During this PSP
    update, node nr 02 suddenly failed. This node is not even an HP blade, so I'm not sure how, but I know network NIC drivers and software were updated by PSP. So my question is:
    Do changes in "Network Connections" on nodes in "Pause" mode affect other nodes in the cluster?
    The networks are listed as "Up" during Pause mode, so the only thing I can think of is that during PSP's driver/software update, NICs on nodes 07 and 08 were going down and up, somehow making node 02 fail.
    So now during maintenance (vendor driver/software/firmware updates, not MS patches) I first put the node in "Pause" mode and then stop the cluster service (and change it to disabled), making sure nothing can affect the cluster.
    Anders

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set up an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across
    separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding
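
    To make the bonding discussion concrete, a minimal sketch of an 802.3ad bond for the DomU trunk in Config 1 above (RHEL/OEL-style ifcfg files; the device names match the question, everything else is illustrative, and the switch ports must be configured for LACP as described):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BONDING_OPTS="mode=802.3ad miimon=100"
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth2 (repeat for eth3)
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none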

Maybe you are looking for

  • Issue while FTP the report from BIP

    Hi, I am trying to FTP a report from BIP to a shared mount point. The report is completing fine and the history says that the FTP has also completed successfully. But when I go to the FTP location, the file is not there. It is happening only with one

  • Mapping proc output to vars gets error extracting result into a variable of type (DBTYPE_UI2)

    Hi, we run std 2012.  I have a proc (sets nocount on) whose params are shown in the first block.   My execute ssis sql task mapping is shown in the block following that (same order as shown, the param sizes are all -1).  The variable characteristics a

  • Out of process memory

    When I do a bulk insert, the following error occurs. Please provide me a solution. ERROR at line 1: ORA-20009: Error on Procedure temp_bill_calls ORA-04030: out of process memory when trying to allocate 768 bytes (Typecheck heap,seg:kggfaAllocSeg) ORA

  • Restriction of inactive items using named search

    Hi experts, I am working in SAP MDM and facing the following issues. In the catalog displayed in shopping carts, when I need to remove inactive items (items with inactive value in the item status field) from that catalog I simply remove the mask for those items.

  • Is there any restriction on Review transaction screen to correct expenditur

    Hello experts, how do I correct the error from the Review Transaction screen, "Task Level Transaction control has been violated", while importing transactions using the transaction import process? I am able to find out the set of data by using Transaction source