Best practice for LIV

Hi all,
I am looking for best practices for Logistics Invoice Verification (LIV), or anything close to it. Please help.
Regards
Mui Kanva

Hi Kanva,
I found the link below and am sharing it based on your requirement.
http://download.sap.com/download.epd?context=5AF27FC1B77FC7E83D61729BF0342146F4273D826CBC3BE0EF818D37F1C81CB5F23129B7E2A9E5D39FD67390F744028B8AC227C9CBB766B9
Please confirm whether the link is helpful.
Regards
R. Senthil Mareeswaran.

Similar Messages

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at-least-once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using WebLogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes those JMS servers' messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa-8.1 migration white paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
              Tom
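    For later versions (EJB 3 style), the durable-subscriber side can be sketched as below. This is a minimal illustration rather than WebLogic-specific guidance: the destination and subscription names are hypothetical, and since delivery is at-least-once, onMessage() must be idempotent.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Durable topic subscriber MDB; the container redelivers on failure,
    // so the handler must tolerate duplicates (at-least-once delivery).
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/BusinessEventTopic"), // hypothetical JNDI name
        @ActivationConfigProperty(propertyName = "subscriptionDurability",
                                  propertyValue = "Durable"),
        @ActivationConfigProperty(propertyName = "subscriptionName",
                                  propertyValue = "clusterB-events")         // hypothetical name
    })
    public class BusinessEventSubscriber implements MessageListener {
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String body = ((TextMessage) message).getText();
                    // Persist keyed on a unique event id so a redundant
                    // redelivery becomes a harmless no-op.
                }
            } catch (Exception e) {
                throw new RuntimeException(e); // force container redelivery
            }
        }
    }

    Deduplicating on a business key (for example, an event id column with a unique constraint) is what keeps the redundant processing the question worries about to a minimum.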

  • Best Practice for Conversion Workflow

    Hello,
    I'm converting video files from our "home grown" virtual media reserve to iTunes U. Some of the files are in RM format, some are already compressed .mov's (not H.264) and some I have the original DV files for.
    Anyone out there have a best practice for converting these file types for posting to iTunes U? I have Final Cut Studio (Compressor), QT Pro and Squeeze available to me.
    Any experience you have with this would be helpful.
    Thanks,
    Jeana

    For converting old files to podcast-compatible video, and given the machine you have, consider the Elgato turbo.264. It is a fairly priced "co-processor" for video conversion, comprising an application and a small USB device with an encoder chip in it. In my experience, it is the fastest way to create podcast video files, and the time it saves will quickly pay for the device (about $100). It also does batch conversion of any video your system currently plays through QuickTime, ships with all the necessary presets, and lets you create your own. It has a few minor limitations, such as not supporting (at this moment) enhanced podcasting features like chapter markers and closed captions, but since you have old files for conversion, that won't matter.
    For creating new content, the workflow varies a lot. Since you mention MP3s, I guess you are also interested in audio files. I would stick with GarageBand, especially if you are a beginner, plus it supports enhanced podcasts.
    In any case, the most important goal is to have the simplest and fastest way to go from recording to publication; the less editing, the better. To attain that, the best methods require the largest investments. For example, for video production the best way is to produce the content live, so that when you finish recording it is only a matter of encoding and publishing. That requires a video switcher that can ingest at least one video camera and a computer output to properly capture presentation material; that is the minimum. There are several devices that can do this for you: some are disguised PCs, and some connect to a PC for tapeless recording. You can check the TriCaster, which I like, though I wish it were a Mac and not a Windows XP PC. Other routes include video mixers from manufacturers like Edirol, Panasonic or Sony, connected to a VTR or directly to a Mac for direct-to-disc capture. If you look at some of the content available in iTunes U, you will see what I describe here. This workflow requires preparation and sufficient live support, but your material is ready for delivery almost immediately after the recorded event, with no editing required. Finally, the most labor-intensive workflow is to record everything separately and edit it later, which is extremely time consuming.

  • Best practice for OSB to OSB communication

    Cross-posting this message:
    I am currently in a project where we have two OSBs that have to communicate. The OSBs are located in different security zones ("internal" and "secure"). All communication on a network level must be initiated from the secure zone to the internal zone, while the message flow should be initiated from the internal zone to the secure zone. Ideally we would establish a TCP connection from the secure zone to the internal zone, and then use SOAP over HTTP on this pre-established connection. Is this at all possible?
    Our best approach now is to establish a JMS queue in the internal zone and let both OSBs connect to it. All communication between the zones is then done over JMS. This approach would work, but is it the only solution?
    Can the t3/t3s protocol be used to achieve our goal, i.e. to have synchronous communication over a pre-established connection (established in the opposite direction of the communication)?
    Is there any other approach that might work?
    What is considered best practice for sending messages from one OSB to another OSB in a more secure zone?

    Hi,
    In my experience on a live project, we used secured communication (HTTPS) between the internal service bus and the DMZ/external service bus.
    We also used two-way SSL with customers.
    The ports were also secured by a firewall between the zones.
    If you would like more details, please email [email protected]
    Ganapathi.V.Subramanian[VG]
    Sydney, Australia
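    To make the JMS-queue approach above concrete, here is a minimal plain-JMS sketch of the secure-zone side (outside OSB itself). The provider URL and JNDI names are hypothetical; the point is only that the secure zone initiates the TCP connection over t3 while the message flow runs from internal to secure.

    import java.util.Hashtable;
    import javax.jms.*;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class SecureZoneConsumer {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://internal-osb-host:7001"); // hypothetical host
            Context ctx = new InitialContext(env);

            QueueConnectionFactory cf =
                    (QueueConnectionFactory) ctx.lookup("jms/InternalCF"); // hypothetical
            Queue queue = (Queue) ctx.lookup("jms/CrossZoneQueue");        // hypothetical

            QueueConnection conn = cf.createQueueConnection();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueReceiver receiver = session.createReceiver(queue);
            conn.start();

            Message m = receiver.receive(30_000); // block up to 30 s for a message
            if (m instanceof TextMessage) {
                System.out.println(((TextMessage) m).getText());
            }
            conn.close();
        }
    }

    Using t3s instead of t3 would additionally encrypt that pre-established connection.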

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVidia drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I recover a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well. I'm just not sure how to revert the kernel or the system when an install goes bad like this.
    Thanks for your attention.

    There is a common misconception to clear up first: a snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. An LVM snapshot is a general-purpose tool that can be used, for example, to quickly capture state prior to a system upgrade; if you are satisfied with the result, you then delete the snapshot.
    The advantage of a snapshot is that it can be taken on a live filesystem or volume while changes are written to the snapshot volume; hence the name "copy on write" (COW), or copy on change if you prefer. This gives you a consistent view of all data at a certain point in time while still allowing changes, which is what you need, for example, to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot takes only seconds and initially does not copy or back up any data until data changes. It is therefore important to delete the snapshot once it is no longer required, to prevent duplication of data and to restore filesystem performance.
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM works on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs for the boot partition, because the GRUB boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before the kernel is loaded (a catch-22).
    I think the following is an interesting and fun-to-read introduction to the basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Best practice for licence server for RDS Farm & Certificate errors

    Hello,
    I am in the process of creating an RDS farm using Server 2008 R2.  I have three Session Hosts and a Connection Broker.
    I have a set of 10 user CALs available and also another 20 on our current RDS server which will need migrating once we go live with the farm.
    I understand the user CALs need to be installed on another Server 2008 R2 machine, and I am wondering what best practice is. We are running an entirely virtual environment, so it would be simple enough to create another server and install the CALs on there.
    The only issue with that is that I would need to create a replica of this new machine for DR purposes, but this would take up valuable space which may not be necessary.
    We are planning on creating replicas of one of the Session hosts and the broker for DR, so I am guessing I would need to install some CALs on the Session Host which is going to be replicated.
    There are a few options and I am just wondering what is the best way to go about things.
    Also, as an aside, I am getting an annoying certificate error each time I log a test user onto the RDS farm. I think this is because I am using the DNS alias of the RDS farm to log on. Is there an easy way to get around this, other than ticking 'Do not show this message again'? I have been doing some research, and the world of certificates is very confusing!
    Thanks,
    Caroline
    C.Rafferty

    Hi Caroline,
    Firstly, for your license-related issue: you can perform the steps on any VM, or create a new VM that also serves as the replica for the RDSH server. Just be sure to install the RD License server role on it, activate it, and then install the RDS CALs on it. To be safe, avoid installing the RD License server on the same machine as the RD Connection Broker if possible; keep it a little apart. You can also install the RD License server on an AD domain controller, or make a replica of that and install RD Licensing on it.
    Best practices for setting up Remote Desktop Licensing (Terminal Server Licensing) across Active Directory Domains/Forests or Workgroup
    http://support.microsoft.com/kb/2473823
    What’s the specified certificate error which you are receiving?
    If you're going to allow users to connect externally and they will not be part of your domain, you would need to deploy certificates from a public CA. In meantime you can refer blog for getting insight for certificate case.
    Certificate Requirements for Windows 2008 R2 and Windows 2012 Remote Desktop Services
    http://blogs.technet.com/b/askperf/archive/2014/01/24/certificate-requirements-for-windows-2008-r2-and-windows-2012-remote-desktop-services.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Best practice for real-time requirement

    All,
    I am trying to find out the best practice for reporting against real-time data. Is it:
    1 - WebI against a universe/BEx query on top of hybrid cubes?
    2 - Crystal Reports directly against the ECC data?
    3 - another solution such as Data Federator, or something different?
    I would like to know if anyone has had such a requirement and to hear about their experience:
    did they face major challenges with hybrid cubes?
    Thanks in advance for your help
    Philippe

    Well, their first requirement was to get real-time data: if I am in Xcelsius and click refresh, I want it to load my latest data.
    With Live Office, I can either schedule a Crystal Report and get the data delayed, or use the Live Office option to refresh it on demand. Is that a correct assumption?
    I was talking about BW, just in case they are willing to relax the requirement from real time to every 5 minutes.
    Just so you know, we are also considering the following option:
    1 - modify the virtual provider on the CRM machine to get all the custom fields needed for the Xcelsius dashboard
    2 - build an interactive report on top of these virtual providers within CRM
    3 - get the link to this report (it is one of the report features within CRM)
    4 - design and build the dashboard on top of it
    5 - export the SWF file to the CRM Web UI
    We are trying to see which one is the best.
    Philippe

  • What are the best practices for generating an EPS logo from InDesign?

    Our customer is running into technical issues with the logo we sent them, which was exported from InDesign: images were not embedded and fonts were missing. I was able to embed the images and fonts. However, we DO NOT want them to be able to make any text changes, so after exporting an EPS I opened the file in Adobe Illustrator and converted all the text to outlines. I hope this works, but I wanted to ask: what are the best practices for doing this?
    The client needs the logo with a transparent background, images embedded and type in outlines. They also need some space around the text; when I exported the EPS, the file was cropped right up to the edge of the type.

    It sounds like you are pretty far from "best practice" with regard to logo design and delivery.
    These days, the very use of the EPS format should be considered bad practice, and some other terms in your post ('images', 'missing fonts') make it sound like there is not a seasoned logo designer involved.
    That said, you probably already got the advice you need to get out of the immediate jam. However, without proper logo design, you and the client will soon face other problems. You should be delivering a 100% vector graphic, in single-color (black) and corporate-color(s) versions, with no live font data, that has been test-scaled to very small and very large sizes to ensure it works at postage-stamp size and on the side of a truck or building. It should use specific spot color(s) and proportions so that it can be offset printed, embroidered, screen-printed on apparel, and cut into signage materials and decals.

  • Best Practices for AP Power

    Can someone tell me whether there are best practices for AP power based on data or voice networks? Specifically, if you are tasked to do a wireless site survey with the intention that RRM will not be used, how would you configure the APs' power for data-only usage versus voice and data?
    I have never read anything that states a best practice for power. I tend to set APs at 6 or 12 mW for 2.4 GHz and 12 mW or 25 mW for 5 GHz, but there is nothing to say whether my practice meets any type of best practice, so I am looking for supporting facts or an idea of what other wireless engineers use when they do site surveys.
    Any information or shared thoughts are much appreciated!!!
    Thanks,
    Mal

    Hello Mal,
    Your question will draw a number of responses from the different RF chefs here, I'm sure!
    I've been in WiFi for a long time. When I sit down with educated customers, they ask the same question. Based on my experience designing high-availability, data-intensive networks, I follow the practice of designing for the "lowest" common denominator.
    With that said, it's often a 5 GHz phone device or even a 2.4 GHz Vocera badge; in any case, both devices live at or around 20 mW. I like to design my networks at 12.5 mW. I find this works better for RRM and also builds in headroom in case I need to adjust power on an AP.
    If a customer requires both a 2.4 GHz and a 5 GHz design, I will just design 5 GHz at 12.5 mW, as the 2.4 GHz network will surely fit inside the 5 GHz design.
    I hope this helps...

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory would be good).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type, which includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running "ncs stop" does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes from the output of "ps -ef" when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
    1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
    You should look to the functional requirements rather than the cost. If the bean needs to be session-scoped (e.g. to maintain the logged-in user), make it so. If it only needs to be request-scoped (e.g. single-page form data), keep it request-scoped.
    2.) If the JSP page where I use that <jsp:useBean> is to be accessed hundreds of times a day, will it strain my server resources? Right now I am using the Sun GlassFish server.
    It will certainly consume resources; just supply enough CPU speed and memory to the server. You cannot expect a web server running on a Pentium 500 MHz with 256 MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
    3.) Can you suggest best practices for memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily eat memory; only allocate memory when your application needs to. You should let the hardware depend on the application requirements, not let the application depend on the hardware specs.
    4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish server take care of that, or will I have to purchase a powerful server?
    GlassFish is just application server software; it is not server hardware. Your concerns here are rather hardware-related.
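    As a minimal illustration of the scope distinction discussed above (the bean classes and names are hypothetical):

    <%-- Session scope: lives as long as the user's session (e.g. the logged-in user). --%>
    <jsp:useBean id="currentUser" class="com.example.UserBean" scope="session" />

    <%-- Request scope: discarded once this response is sent (e.g. single-page form data). --%>
    <jsp:useBean id="formData" class="com.example.FormBean" scope="request" />
    <jsp:setProperty name="formData" property="*" />

    The request-scoped bean becomes garbage-collectible as soon as the response is sent, which is why it is the cheaper default whenever no cross-request state is needed.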

  • Is there a list of best practices for Azure Cloud Services?

    Hi all;
    I was talking with a SQL Server expert today and learned that Azure SQL Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting, as opposed to when we go live.
    Cloud Services are not SQL Server (obviously), but that led to the question: is there a list of best practices for Azure Cloud Services? If so, what are they?
    We will be placing the cloud services in multiple datacenters and using Traffic Manager to point people to the right one. The cloud service will sit between an IMAP client and server, pretending to be the mail client to the server, and the server to the client. Mostly it will pass all requests and responses across from one to the other.
    thanks - dave

    hi dave,
    >>Cloud Services are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Cloud Services? If so, what are they?
    For this issue, I have collected some blogs and documents about best practices for Azure cloud services. You can review them, though I am not sure they cover exactly what you need.
    http://msdn.microsoft.com/en-us/library/azure/xx130451.aspx
    http://gauravmantri.com/2013/01/11/some-best-practices-for-building-windows-azure-cloud-applications/
    http://www.hanselman.com/blog/CloudPowerHowToScaleAzureWebsitesGloballyWithTrafficManager.aspx
    http://msdn.microsoft.com/en-us/library/azure/jj717232.aspx
    http://azure.microsoft.com/en-us/documentation/articles/best-practices-performance/
    >>The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client. Mostly it will pass all requests & responses across from one to the other.
    For your scenario, if you'd like to communicate between instances, I recommend you refer to this document
    (http://msdn.microsoft.com/en-us/library/azure/hh180158.aspx). And generally, if you want to connect the client to the server on Azure, Service Bus is a good choice (http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-multi-tier-app-using-service-bus-queues/).
    If I misunderstood, please let me know.
    Regards,
    Will
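    As a minimal sketch of the Service Bus queue pattern from the last link, here is the sending side using the azure-messaging-servicebus Java SDK (an assumption; the thread itself does not name a language). The connection string and queue name are placeholders.

    import com.azure.messaging.servicebus.ServiceBusClientBuilder;
    import com.azure.messaging.servicebus.ServiceBusMessage;
    import com.azure.messaging.servicebus.ServiceBusSenderClient;

    public class QueueSendSketch {
        public static void main(String[] args) {
            String conn = "<your-service-bus-connection-string>"; // placeholder
            ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                    .connectionString(conn)
                    .sender()
                    .queueName("imap-relay-queue") // hypothetical queue
                    .buildClient();

            // Hand a request from one role to another via the queue, so the
            // instances stay decoupled across datacenters.
            sender.sendMessage(new ServiceBusMessage("IMAP request payload"));
            sender.close();
        }
    }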

  • ICMP Best Practices for Firewall

    Hello,
    Is there Cisco documentation on ICMP best practices for firewalls?
    Thanks

    Hello Joe,
    I haven't looked for such a document, but here is what I can tell you.
    ICMP is a protocol that lets us troubleshoot and test whether IP routing is working on our network, or whether a host is live, so from that perspective it is definitely something good (not to mention other useful applications of the protocol, such as Path MTU Discovery).
    But you also have to be careful with this protocol: as we all know, it is also used to scan for and discover hosts on a network, and even to perform DoS attacks (Smurf attack, etc.).
    So what's the whole point of this post?
    In my opinion, I would allow ICMP on my network, but permit only the right ICMP message types and protect the network against known ICMP DoS vulnerabilities. That way I still take advantage of a really useful protocol while protecting my environment.
    Hope that I could help.
    For Networking Posts check my blog at http://laguiadelnetworking.com/
    Cheers,
    Julio Carvajal Segura
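    As a small illustration of the legitimate use described in the reply (testing whether a host is live), Java can issue such a probe directly. Note that isReachable() only attempts a true ICMP echo request when the JVM has sufficient privileges, falling back to a TCP echo probe otherwise; the address below is a documentation placeholder.

    import java.net.InetAddress;

    public class ReachabilitySketch {
        public static void main(String[] args) throws Exception {
            InetAddress host = InetAddress.getByName("192.0.2.10"); // placeholder address
            boolean alive = host.isReachable(2000);                 // 2-second timeout
            System.out.println(host.getHostAddress()
                    + (alive ? " is live" : " did not answer"));
        }
    }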

  • Is there a list of best practices for Azure websites?

    Hi all;
    I was talking with a SQL Server expert today and learned that Azure SQL Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting, as opposed to when we go live.
    Websites are not SQL Server (obviously), but that led to the question: is there a list of best practices for Azure Websites? If so, what are they?
    We will be running the website in multiple datacenters and using Traffic Manager to point people to the right one. The website will run as a REST server using Web API 2, mostly for license checks from our app running on corporate Exchange servers. A small part will be JavaScript-based web pages used for account CRUD.
    thanks - dave

    Sorry, I was not sure whether you were using a web server or something else.
    One other idea out of the slow cooker is to use a dedicated web server with SQL Server running beside IIS.
    If you are using the standard web sites, then the SQL option is about as slow as FTP uploads, which are very slow.
    I have considered time of day too, and it seems Azure is less loaded early in the morning.

  • Export and Deployment - Best Practices for RAR and CUP

    Hi Experts,
    I wanted to know what, in your opinion, is the best practice for deployment of GRC in a three-system landscape.
    We have a development landscape which connects to all our environments: Dev, QA and Prod.
    Is it recommended to have just the production client connected to the production boxes only and use Dev/QA for the other environments, or is it a good idea to keep Prod and QA in sync?
    In my opinion it looks like a good idea to have QA and Prod the same, as it would make exports easier. Maybe I am wrong.
    What, according to you all, is a good recommended practice here?
    Thanks,
    Chinmaya

    Hi Chinmaya,
    It depends on how many clusters you have in your landscape.
    If it is something like five Dev boxes connecting to five QA boxes, and so on, then best practice is to have separate Dev, QA and Prod boxes for GRC, if money (hardware) is no constraint for the organization.
    Rather than later asking SAP for deletion scripts to remove sandbox or dev connectors, it is best to have separate boxes for each.
    Also, whenever you make rule changes in RAR or config changes in CUP in the future, it is best to test in QA first, as CUP will become very critical for your organization post go-live.
    A further benefit is that management reports will then reflect true data for Prod only.
    regards,
    Surpreet

  • Is there a list of best practices for Azure Sql Server?

    I was talking with a SQL Server expert today and learned that Azure SQL Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting, as opposed to when we go live.
    Is there a list of best practices for Azure SQL Server? If so, what are they?
    We will be calling the database from multiple instances, both Azure websites and Azure cloud services. The calls will be about 90% selects, 9% updates, and 1% inserts.
    thanks - dave

    Hi,
    Here is some documentation on Azure SQL Database to get you started.
    http://msdn.microsoft.com/en-us/library/azure/ee336279.aspx
    http://msdn.microsoft.com/en-us/library/azure/ff394102.aspx
    http://msdn.microsoft.com/en-us/library/azure/jj156164.aspx
    http://msdn.microsoft.com/en-us/library/azure/ee336282.aspx
    http://msdn.microsoft.com/en-us/library/azure/ee336256.aspx
    Regards,
    Mekh.
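    One practice that recurs in the Azure SQL material is retrying transient connection failures, which is relevant when many website and cloud-service instances all call the database. A minimal JDBC sketch under that assumption; the server, database, table and credentials are hypothetical, and production code would inspect the SQLException error codes to retry only genuinely transient errors.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TransientRetrySketch {
        // Hypothetical connection details; real values come from the Azure portal.
        private static final String URL =
            "jdbc:sqlserver://myserver.database.windows.net:1433;"
            + "databaseName=mydb;user=me@myserver;password=...";

        public static int countRows() throws SQLException, InterruptedException {
            SQLException last = null;
            for (int attempt = 1; attempt <= 3; attempt++) {
                try (Connection c = DriverManager.getConnection(URL);
                     Statement s = c.createStatement();
                     ResultSet r = s.executeQuery("SELECT COUNT(*) FROM licenses")) {
                    r.next();
                    return r.getInt(1);
                } catch (SQLException e) {
                    last = e;                      // assume transient here; real code
                                                   // checks the vendor error code first
                    Thread.sleep(1000L * attempt); // simple linear backoff
                }
            }
            throw last;
        }
    }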
