Avoiding XFS fragmentation

(sorry if this is the wrong forum, I guess Multimedia would be appropriate as well)
My /media disk is formatted with XFS and houses mainly MythTV recordings, videos, music and photographs. Lately I had noticed that MythTV performance was getting abysmal - more often than not, when watching live TV and a new program started, playback would freeze and I'd have to exit live TV and restart watching to see the new program. I knew I was low on disk space, but MythTV deletes recordings as needed, so that shouldn't have been the issue. But then I noticed that deleting files from /media took a really, really long time - XFS is known for very speedy deletions, so obviously something was wrong.
This was unheard of. With updates, Arch has continuously been getting better performance; while there are regressions every now and then, they're not permanent and certainly nothing on this scale. Then it finally hit me... could it be an issue that I had all but forgotten, fragmentation? After some googling it turned out this really could be the case, and sure enough, the drive was >98% fragmented. After several hours of xfs_fsr the performance was restored; deleting files of several gigabytes now happens instantly again. It also turns out this was a case of PEBKAC - the drive housing my / partition had died a while back, and now /media was in fstab with "noatime,defaults"; I had forgotten to define allocsize, and the default allocsize (64 kB, AFAIK) is really unsuited for MythTV.
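For reference, a rough sketch of how the fragmentation check and the defragmentation look on XFS (the device and mount point here are just placeholders; adjust to your own setup):

    xfs_db -r -c frag /dev/sdb1    # reports "actual ..., ideal ..., fragmentation factor ...%"
    xfs_fsr -v /media              # defragment files on the mounted XFS filesystem (can take hours)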
But this got me wondering: what would be the ideal allocsize? The MythTV wiki recommends 512m, which is probably correct for the video files as well as the recordings. And ever since I discovered Spotify (which thankfully runs very well under Wine), my music collection has been updated very infrequently. But I update /media/images from my digicam quite often, and obviously even at 8 Mpix the images are far smaller than 512 MB.
So what's your opinion: should I use 512m, or a smaller value? Or should I just use a different drive for images/music and dedicate this one to recordings and video?
Last edited by alanmies (2009-10-24 12:41:52)

allocsize=512m for mythtv is perfect if the primary use of that partition is indeed recordings.
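For reference, a minimal sketch of what that looks like in fstab - the device, mount point and the other options are only examples; allocsize is the relevant part:

    # /etc/fstab (example entry)
    /dev/sdb1   /media   xfs   defaults,noatime,allocsize=512m   0   2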

Similar Messages

  • What is fragmentation, and how do I avoid it?

    Dear All,
    Please help me find complete documentation on fragmentation and the solutions (remedies) for fragmentation in Oracle 9i and 10g.
    Thanks,
    Mahipal Reddy

    > So, how can i avoid this fragmentation?
    There is no such fragmentation on Oracle 10G when tablespaces are created as locally managed.
    In basic terms - you will only get an extent allocation failure when the tablespace is between 99.9% and 100% full. This is unlike Oracle v7, for example, where there could be 20% free space in the tablespace, but it was unusable.
    In other words - there are no tablespace fragmentation problems with 10G.
    Table data block fragmentation? That is not really fragmentation either. There are simply used data blocks below the high water usage mark that can still be used for new rows, and used data blocks whose used space is above that mark and that cannot be used for new rows. The PCTUSED and PCTFREE parameters govern these high and low water marks.
    The important factors here are the data block size, the sizes of the rows, and the PCTFREE and PCTUSED settings. An additional factor could be how bulk inserts are handled, for example (as direct path inserts or not).
    Having a table that has allocated for example 1GB of space, and is only using 800MB of that space, is not an error or a problem or a sign of fragmentation. It is to be expected. And that 200MB space not used at the moment, will be used for new inserts or updates - depending on whether those data blocks are on the free list for that table or not.
    I suggest that you read about Oracle space management in the Oracle® Database Concepts guide: http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/logical.htm#i8531
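    For example, a locally managed tablespace (the default in 10g) can be created like this - the name, datafile path and sizes are purely illustrative:
    -- Illustrative sketch: locally managed extents plus automatic segment
    -- space management, which together avoid classic tablespace fragmentation.
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 500M AUTOEXTEND ON
      EXTENT MANAGEMENT LOCAL AUTOALLOCATE
      SEGMENT SPACE MANAGEMENT AUTO;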

  • IP Fragments on FWSM

    During our network load testing, one of our FWSMs drops IP fragments even after changing the fragment size and chain values (a sketch of those commands appears at the end of this post).
    How do we avoid the fragmented packet drops on the FWSM?
    Below is the output of the fragmented packets being dropped.
    FWSM# sho np 3 flo sta
    Flow Control: Rate Limit Statistics
    GF Dropped : 0
    Syslogs Dropped : 0
    Route Packets Dropped : 0
    ARP Packets Dropped : 0
    Fornax Server Packets Dropped : 0
    Fornax Client Packets Dropped : 0
    Other IP Packets Dropped : 0
    L7 Fixup Packets Dropped : 0
    NP3 Fixup Packets Dropped : 0
    ARP/L2 Indications Dropped : 0
    Other Indications Dropped : 0
    NP1 Sessions Dropped : 0
    NP2 Sessions Dropped : 0
    NP3 Sessions Dropped : 0
    IP Fragments Dropped : 8968640
    Packets to CP Dropped : 0
    FWSM#

    The same problem exists with the ASA and PIX all the way up to the 8.2 software level. In software level 8.3 and above you can define an "object network" and a "range" inside it. You can then group the "object network" inside an "object-group network" if you want to group multiple ranges in one object. The "object network" can only hold a single host/subnet/range.
    What is the exact situation where you want to use an IP range?
    What are you trying to do for the hosts in the IP range?
    Maybe there is some alternative way to go about it. But I admit that it's a problem. There are some other "object-group" related problems and missing functionality that make life hard for some firewall admins.
    - Jouni
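    For reference, the fragment-database settings mentioned in the original question are tuned per interface roughly like this on the FWSM/ASA - the interface name and values are only placeholders, and the right limits depend on your traffic:
    ! Example only - adjust interface names and limits to your environment
    ! size  = max fragments the reassembly database can hold for the interface
    ! chain = max fragments allowed per fragmented packet
    ! timeout = seconds to wait for all fragments of a packet to arrive
    fragment size 400 inside
    fragment chain 45 inside
    fragment timeout 10 inside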

  • What is the difference between Table & Tablespace Fragmentation

    What is the difference between Table Fragmentation & Tablespace Fragmentation?
    What causes Table Fragmentation and what causes Tablespace Fragmentation?
    How can we avoid Table Fragmentation & Tablespace Fragmentation?
    How can we fix already Fragmented Tables & Fragmented Tablespaces?
    Thanks
    Naveen

    Unless you are using an exceptionally old version of Oracle or are still using dictionary managed tablespaces or are using some interesting definitions of "fragmentation", fragmentation is practically impossible in Oracle.
    Justin
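    One quick way to confirm that the tablespaces are locally managed (a sketch; it needs an account that can see DBA_TABLESPACES):
    -- Locally managed tablespaces (EXTENT_MANAGEMENT = 'LOCAL') do not suffer
    -- from the classic dictionary-managed tablespace fragmentation.
    SELECT tablespace_name, extent_management, allocation_type, segment_space_management
    FROM   dba_tablespaces;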

  • Objects (table, index, partition) fragmented

    Hi all,
    I run the script below to find fragmented objects:
    select segment_name, segment_type, count(*)
    from dba_extents
    where owner = upper('&ownname')   -- SQL*Plus substitution variable: the schema owner
    group by segment_name, segment_type
    having count(*) > 1
    order by 3;
    And I get a list like this:
    object_names object_types 249
    My question is: are the objects I have fragmented, and what can I do to solve that fragmentation, i.e. to avoid object fragmentation?
    regards
    raistarevo

    Hi,
    You can go to the 8i documentation for more information:
    http://download-west.oracle.com/docs/cd/A87860_01/doc/appdev.817/a76936/dbms_s2a.htm#1004668
    But I advise you to read this thread below first.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:8381899310385
    Cheers
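    If a specific segment really does need reorganizing (for example to reclaim space below its high water mark), a rough sketch of the usual approach - the table and index names are placeholders, and SHRINK SPACE needs 10g+ with an ASSM tablespace:
    ALTER TABLE my_table ENABLE ROW MOVEMENT;
    ALTER TABLE my_table SHRINK SPACE;        -- reclaims space below the high water mark
    ALTER INDEX my_table_idx REBUILD;
    -- Alternatively: ALTER TABLE my_table MOVE;  (marks indexes UNUSABLE, rebuild them afterwards)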

  • How can I delete the members in a dimension using MaxL?

    Hi All,
    I want to delete all the members of the dimensions before reloading, so how can I write a MaxL script to delete the members in the dimensions?
    Please, can anyone help with this? It's quite urgent.
    It would be appreciated...
    Thanks

    There is no way to alter an outline directly with MaxL; MaxL is an admin language for processing. The only option, as Glenn mentioned, is to issue the reset all command to blow away everything. But you probably don't want to do that, so instead you use MaxL to run a series of load rules to achieve your goal.
    Load rules do not have functionality to delete outline members, but they do have a feature that allows you to remove unspecified members during a dimension build, so when loading in your members, any existing members that are not in the import are removed. So Glenn's point was to create a source file with a single dummy member. When you load using "Remove Unspecified", it will delete all the members and keep just the one dummy member. Then when you go back a second time and actually load the real members, the dummy member will be removed and you will have a clean dimension with only your new members. This is a workaround, but it's what a lot of people do.
    Now if you have 15 dimensions you need to do this with, you might want to take a slightly different approach. I'm assuming 15 dimensions is probably an ASO cube and this method will be preferable to avoid outline fragmentation.
    Create a copy of your outline with all the base dimensions that do not change intact. For the ones that do change, just have your dim root member. Consider this your starting outline template and save a copy of it. When you want to rebuild your cube have a server process that takes the copy of your template outline and copies it over your existing outline file in the directory structure. Then run your load rules to import your new members.
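    For what it's worth, a very rough sketch of the MaxL side of that (the application, database, file and rules-file names are all placeholders, the "Remove Unspecified" option itself lives in the rules file, and the exact import grammar should be checked against the MaxL reference for your Essbase version):
    /* Rebuild a dimension from a source file using a dimension-build rules file. */
    import database 'Sample'.'Basic' dimensions
        from local text data_file 'members.txt'
        using server rules_file 'DimBuild'
        on error append to 'dimbuild.err';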

  • In need of greater clarity on RAM and 64 Bit versus 32 Bit Apps...

    I have 13 GB of RAM in my Mac Pro, and I am running Leopard. I noted, however, that Adobe Photoshop only appears to recognize 3072 MB of RAM (3 GB), as it is still a 32-bit program. 32-bit programs can still only recognize a maximum of about 3 GB of memory.
    I read somewhere that while Photoshop does not recognize all the RAM, performance is increased even so if you have more than 3 GB of RAM; which would make sense, as you can allocate that much to Photoshop and still have leftover RAM for your system and other apps.
    When I noticed that Photoshop did not recognize all my RAM, I was extremely disheartened. After all, I did splurge quite a bit on the RAM in hopes of dramatically increasing performance in Photoshop.
    But as I think about it more, while Photoshop can only see 3 GB of RAM, having 13 GB means that I can run more RAM-intensive programs and each program will have enough RAM available. But I wonder what that means in terms of keeping up with the processors?
    I have been in the habit of closing most programs when working in a RAM-intensive program such as Photoshop to improve performance. But I am thinking that I no longer have to really worry about this?
    I would appreciate a discussion of this to add some clarity to the 32-bit/64-bit issue and RAM allocation on the Mac Pro.

    Hello Gene.
    My understanding is much the same: 32-bit programs theoretically can't allocate more than 4 GB of RAM. Maybe the 3 GB you are talking about is another level of limitation, or caused by some allocation mechanism beyond my knowledge.
    When using your computer, take a look at Activity Monitor and see how much RAM is free. As long as you have enough RAM, there is no need to close applications, since the memory manager will not start swapping to disk. That said, I think it may improve performance to close applications to avoid memory fragmentation, but the impact is not the same.
    And you're right, it is useless to have that much RAM for a single process, but since Mac OS X has pretty good multitasking, you can run lots of processes, each one allowed to use 4 GB of RAM, so you keep performance up even with lots of big applications launched.
    Now, be patient, and wait for a full 64-bit version of Photoshop to really enjoy your RAM.
    P.S. Maybe some plugins can spawn new processes and get their own address space for some heavy filter operation? If someone knows, please clarify; that would be interesting.
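    To put a number on the 4 GB figure mentioned above (simple arithmetic, not specific to Photoshop): a 32-bit pointer can address 2^32 bytes = 4,294,967,296 bytes = 4 GiB of virtual address space per process. Part of that range is reserved for the OS and shared libraries, which is why a 32-bit application such as Photoshop ends up with roughly 3 GB it can actually use.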

  • Cannot format hard drive on MacBook Pro

    Hi, I am trying to run Boot Camp and install Windows but am having no luck. When I try to partition the drive through Boot Camp it tells me that it cannot be partitioned because some files cannot be moved. The message also says to format the drive after backing it up and partition after that. I have backed up my drive and am trying to erase the disk, but the erase disk option is grayed out when I click on the disk to be formatted.
    Also, I tried to partition the hard drive without using Boot Camp, and it will not allow me to format the partition as FAT, or anything other than Mac OS Extended (Journaled).
    What am I doing wrong? Also, if I try to verify the disk for repair, Disk Utility freezes and I have to force quit it. My only thought on reading this is that maybe I should partition through Disk Utility if it will work, format to Mac OS Extended, and then format the partition to NTFS afterwards. Is this the best way to do this?

    plorelle wrote:
    Hi, I am trying to run Boot Camp and install Windows but am having no luck. When I try to partition the drive through Boot Camp it tells me that it cannot be partitioned because some files cannot be moved.
    This happens because your drive is too fragmented and Boot Camp Assistant can't find a contiguous piece of free space to make the Boot Camp partition.
    The message also says to format the drive after backing it up and partition after that. I have backed up my drive and am trying to erase the disk, but the erase disk option is grayed out when I click on the disk to be formatted.
    What am I doing wrong?
    You can't erase a drive while you are booted from it. How did you back up the drive? You need to make a bootable clone and test that it works. Boot from the clone and repartition (not erase) your main hard drive using Disk Utility's partition tab, then clone the clone back. Boot from the internal drive and try using Boot Camp Assistant again. Keep in mind that you need to leave plenty of free space on the OS X partition after you create the Boot Camp partition; at least 15% of the OS X partition should be free to avoid disk fragmentation issues.

  • Best practice for breaking a book

    Hi Everybody,
    I'm making a long technical manual in InDesign, and am facing the issue of how and when to break it into separate ID documents collected into an ID book. In the past, I've never used ID for manuals longer than 40 pages, where I had no problems with the document size. My latest manual was broken up this way:
    - Front cover
    - Front matter/Safety
    - ToC
    - Main Text
    - Back cover
    With this longer manual, I will have to split the Main Text into separate parts. My preference would be to avoid excessive fragmentation, to make exporting for translators or other uses less time-consuming. Also, having fewer documents in the book would make mass replacement (Find/Replace > All Documents) much easier.
    So, at the moment I would avoid breaking by chapter and go for major sections instead, as in this schema:
    - Front cover
    - Front matter/Safety
    - ToC
    - Main Text / Introduction
    - Main Text / Basic tasks
    - Main Text / Less basic tasks
    - Main Text / More or less advanced tasks
    - Main Text / Absolutely advanced tasks
    - Main Text / Stay away from these tasks
    - Main Text / Appendix
    - Index
    - Back cover
    Do you see any issue in breaking into sections instead of separate chapters?
    Thank you!
    Paolo

    While carrying on with this work, I would say that it is important to find a balance between a document that is too big and too many chapters in a book. A big document will be more difficult to navigate and slower to use. Too many chapters will make changing condition status and other mass operations slower.
    Also, be careful not to create cross-references before breaking the book into separate chapters. There is no apparent way of automatically relinking broken references; they have to be entered again by hand.
    Paolo

  • Photoshop CS5 very slow performance, MacBook Pro i7

    Hi, I've noticed some posts on slow screen redraw, but my issue seems to be specifically with performance on web-based files that have many layers and layer groups.
    A sample file is a 27 MB PSD. It has a lot of layers (525), mostly type layers (limited to 3-4 typefaces in the file) and layer groups. If I select a particularly large layer group and try to move it on the canvas, I get the Adobe throbber cursor icon for at least 20 seconds. Even small layer groups bring up the throbber cursor for a few seconds. Overall, things seem much slower than CS4. Not good! And I'm just speaking of basic operations, not hitting intensive tools or major filters yet.
    I've looked at the site below, but there's not much information except the option to purchase some scripts to 'warm up' Photoshop. After what I spent on Design Premium, I don't think I should have to purchase third-party scripts to warm it up. (I did experiment with the tiles option under the performance prefs, but upping that made opening files take 2 minutes, where before opening took 10-15 seconds.)
    http://macperformanceguide.com/index_topics.html#OptimizingPhotoshopCS5
    I have a brand new MacBook Pro i7, 8GB, 7200 internal drive. I installed everything from a fresh install of Snow Leopard. CS4 has never been installed on this Mac.
    Any thoughts?

    If you have 4 GB of RAM, the system needs about 1-2 GB,
    so if you quit all other apps and only run PS, it has 2-3 GB of RAM left.
    You can set the RAM usage as a percentage in Photoshop's performance settings;
    I set it to 70%, and the rest is for OS X.
    So let's say PS uses 2 GB of RAM in your case: the more hi-res images you open at once, the more layers they have,
    and the bigger they are, the sooner those 2 GB of RAM are gone and PS will use the scratch disk for the images you opened, which is also configured in the settings. Use of a scratch disk (PS writes data to it constantly) will slow down overall performance; it may lag and everything gets slower than before (in my experience). Best for scratch disk usage is a separate, fast, internal disk, as large as possible (50-250 GB and empty, with no files on it, so that PS can write continuously and avoid disk fragmentation, which would slow down the writing process even further).
    That's the reason I would add 8 GB of RAM.
    RAM is very important for Photoshop CS5.
    (Also processor clock speed and number of cores - at the moment 2 cores are enough,
    as CS5 isn't optimized yet for more than 4 or 6 cores. More to come soon, I hope...)
    In my case, I added 24 GB of RAM because I want to avoid using a scratch disk at all. This is a huge performance gain for me.
    I never liked my old scratch disk, as working with it was terribly slow. I also had only 4 GB of RAM in my last Mac, and working was no fun.
    I work on large-format files, 50-100 cm at 300 dpi with lots of layers; each layer has to be seen as an image in itself, so each layer raises RAM consumption.
    It's a pity that I never saw Adobe deliver some kind of performance guide for Photoshop on how to improve performance for professional users, like posting such a guide on their main Photoshop site. I had to dig for this information for a long time and found it in the end at diglloyd's comprehensive site. I wonder why Adobe makes a secret of Photoshop performance; a lot of my friends at work mentioned that they had had huge lag issues and didn't know where to begin solving these problems, which are certainly solvable (new hardware, proper settings and so on).
    You can read detailed instructions on how to speed up things in PS here (where I got it myself):
    http://macperformanceguide.com/index_topics.html
    subtopic:
    http://macperformanceguide.com/OptimizingPhotoshopCS5-Intro.html
    Enjoy your new MacBook Pro!
    Peter

  • Hi, my places.sqlite file size is 30,720 KB. Have I reached the maximum size, and is there even a maximum size for this? Visited links are no longer changing color.

    Hi,
    My places.sqlite file size is 30,720 KB. Have I reached the maximum size, and is there even a maximum size for this?
    Suddenly the visited links are no longer changing font color. As I am preparing for an exam, I need visited questions to change color to keep track of the questions I have finished. But if I delete a few days of history, then a few more visited links change color, and then it stops again, so it seems something is getting full and is not able to accommodate any more. Why are my visited links no longer changing color after a certain number of visits? I do have a backup of the places.sqlite file. I have tried everything from deleting the profile, uninstalling and reinstalling, and creating a new profile, then copying places.sqlite back, etc., but as mentioned, after a few visits the visited links no longer change color; if I delete a few days of history, then again a few visits will change color and then it stops again. So what should I increase so that my visited-links quota is increased? I have also tried tweaking about:config and it has had no result, although I was not really confident that increasing browser.history_max_pages (don't remember the exact name, but I am sure you get the idea) was going to help.
    It seems as though my "visited links change color" quota is full, and only if I delete a few days of history will I get a few more visited links to change color. Can somebody shed some light? As mentioned, my places.sqlite file size is 30,720 KB, so I think perhaps this has something to do with it? I would really appreciate it if someone could help. Thank you.

    There is no maximum for the places.sqlite database and other SQLite database files, as I wrote above.
    All SQLite database files have fixed minimum sizes, and if they run out of space they are automatically increased in size by a specific chunk size. For places.sqlite, both the minimum and the chunk size are 10 MB.
    *Bug 581606 - Avoid sqlite fragmentation via SQLITE_FCNTL_CHUNK_SIZE

  • The sqlite files send me over quota. Is there any plan to fix this?

    This thread [ https://support.mozilla.org/en-US/questions/786526 ], which I found and which was closed last year, makes it look like you are aware of the problem and just don't care about people who have to deal with storage limits in the workplace. Still, I have to ask, because there are a couple of things that Firefox does better that make me wish I could use it at work.

    Yes. 10 MB is the minimum size of the places.sqlite file, and if necessary the file is increased in 10 MB chunks.
    Other sqlite database files have their own minimum sizes, but places.sqlite is the largest because it stores bookmarks and history.
    This is done to avoid fragmentation of the file on the hard drive, which would hurt performance because of the many writes to this file.
    Unfortunately, there is nothing you can do about this.
    *Bug 581606 - Avoid sqlite fragmentation via SQLITE_FCNTL_CHUNK_SIZE
    (please do not comment in bug reports: https://bugzilla.mozilla.org/page.cgi?id=etiquette.html)
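    For the curious, this is roughly how an application can ask SQLite for that behaviour via SQLITE_FCNTL_CHUNK_SIZE - a minimal sketch, not Firefox's actual code, and the file name is just a placeholder:
    /* Sketch: grow the database file in 10 MB chunks to reduce fragmentation. */
    #include <sqlite3.h>
    #include <stdio.h>

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("places.sqlite", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        int chunk = 10 * 1024 * 1024;  /* number of bytes to grow the file by at a time */
        /* "main" refers to the primary database of this connection. */
        sqlite3_file_control(db, "main", SQLITE_FCNTL_CHUNK_SIZE, &chunk);
        sqlite3_close(db);
        return 0;
    }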

  • How do you control SPDIF input levels in Logic Pro using an Apogee Ensemble?

    Hello,
    Question. I have an Apogee Ensemble audio interface hooked up to my Macbook Pro. I have a Motif XS8 that I record audio from through SPDIF.
    I am able to get sound through to my audio track once I set the inputs in Logic; however, the levels are not even close to halfway. I cannot figure out how to adjust the SPDIF level to make it hotter, because I know that once I add virtual tracks and vocals through my preamp, the audio from my Motif will likely not be audible.
    Please let me know.
    Also, when I record an audio track it says about 15 minutes remaining to record. Does that mean I have 15 minutes of audio for each track, or for my whole recording session?

    I don't own an Apogee Ensemble, so maybe someone who does and has more definitive info can jump in, but do not confuse the lower level of the SPDIF input with "lesser quality". It's a digital connection, with no analog conversion.
    -18 dBFS on a digital scale is the same as 0dB on an analog scale.
    I think what you're experiencing is the fact that so many people think they need to get the meters in Logic closer to 0 dB, which is actually NOT what you want (when staying ITB). This thinking is left over from the days of analogue recording, or the early days of 16-bit digital recording.
    Turn your other sources in Logic down, and you'll reap the benefits of not overloading the 2-bus, allowing the plug-ins to do their computations without distorting, etc. Then you can bring the overall level of your mix back up in mastering.
    If the Motif is considerably lower in volume than that, you could always insert a gainer plug-in on the audio track.
    As for the 15 minutes, that is what Logic has pre-allocated for recording to your hard drive. It resets each time you go into record, so it's not saying "you only have 15 minutes of record time available". It's saying "you have 15 minutes of hard disk space allocated each time you go into record, based on your sample rate and bit depth settings". This can be changed in the Audio settings, but it's recommended to keep this number as low as necessary, to avoid disk fragmentation.

  • Error when trying to partition for Bootcamp

    Evening everyone,
    I use Parallels at the moment but have a piece of software which isn't quite behaving itself. As I have just upgraded to Leopard, I thought I would see if setting up Boot Camp and running it in there would solve the problem. I freed up plenty of disk space, but when trying to create a 10 GB partition in the setup assistant I got this error:
    'The disk cannot be partitioned because some files cannot be moved. Back up the disk and use Disk Utility to format it as a single Mac OS Extended (journaled) volume. Restore your information to the disk and try using Boot Camp Assistant again.'
    I don't know much about the technical side of my computer - partitioning, formatting, etc. - so that message sounds very scary to me. Is it saying I need to wipe my hard drive and re-format it? It is already formatted as Mac OS Extended (Journaled). If so, can I do that from my Time Machine backups? It sounds like a really drastic action. Also, could this be anything to do with my current Parallels software?
    Thanks very much for your time, any help would be much appreciated!

    when you say "plenty of disk space" how much exactly is that?
    that message indicates that bootcamp assistant couldn't find big enough chunk of contiguous free space to create a bootcamp partition. this happens due to disk fragmentation.
    at the minimum you should boot from the leopard install DVD and repair the hard drive (repair disk, not permissions).
    If you still want to install bootcamp you need to defrag your hard drive. there are several ways to do it. you can use 3rd party software like idefrag. You can clone your disk to an external drive using a 3rd party cloner like CCCloner or Superduper. then boot from the clone and clone it back to the main drive. Or, if you have time machine, you can restore your system from a TM backup. that will also defrag your hard drive. To do that boot from the install DVD and select 'restore system from backup" from the Utilities menu at the top. that seems to be the suggestion the message you saw makes.
    also, keep in mind that you need to keep plenty of free space (>15%) on your OS X partition to avoid disk fragmentation problems.
    Message was edited by: V.K.

  • Exporting .avi from FCP 3 / Opening FCP3 project in FCP 4

    Hello,
    I am currently working on a project in FCP 3. I need to export an .avi file for use in Adobe Encore on a PC. I have done this before and used the DV PAL codec, which worked OK, but with this project the image quality is not OK; mostly there seem to be small jumps in a sequence (it is on the verge of not running smoothly... sorry, hard to describe).
    I tried all the other codecs I have (BMP, Cinepak, DVCPRO-PAL, DVCPRO50 and no compressor) and all of them either gave unacceptable image quality, did not run smoothly, or took several hours to render a 3-minute film.
    Can anyone recommend a codec that gives decent image quality and does not take hours to render (I am working on a 900 MHz G3 iBook)? Is there maybe an MPEG-2 codec I could use so that I could avoid recompressing in Encore?
    I also tried importing the project into FCP 4 on a 1.5 GHz G4 PowerBook but cannot open the file. Is there any way to convert the project to FCP 4? I read about using the XML export for transferring from FCP 4 to 5 or vice versa, but FCP 3 does not have the XML option.
    Any ideas would be greatly appreciated.
    Natalie
    iBook G3, 900 MHz, 640 MB RAM, Mac OS X (10.3.9)

    I don't know better than you, Randy, but I do have strong opinions.
    OS X does tend to avoid file fragmentation, but that can only do so much, depending on how full the volume is and how much file deletion has taken place.
    If the volume is very full, and you can't delete enough data to trim it down, the best thing to do is move/copy all the data to another volume, wipe the 1st volume, then move/copy all the data back. This is usually faster than using a defrag utility.
    For best performance, keep your volumes less than 60% full. But I know guys who regularly run at 90 and 95% full without issue. So, go figure. And I know that conventional wisdom says to try to keep volumes less than 80% full. The thing is, when you benchmark a volume with a utility that measures throughput on the outer, inner, and middle tracks separately, you find that most hard drives will slow down when more than 60% full. NOTE: Not slow down terminally, but slow down by 10 to 20 percent or so.*
    But the real answer to the defrag question is, don't worry about it . . . as Randy said.
    * If you do the math, you'll find that in a given amount of time the read/write heads pass over only about half as much surface area on the inner tracks as on the outer tracks. FWIW.
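    In case it helps to see the arithmetic behind that footnote (a rough sketch, assuming a constant rotational speed and roughly constant linear bit density): the track circumference is 2*pi*r, so the amount of media passing under the head per revolution is proportional to r. If the innermost data tracks sit at roughly half the radius of the outermost ones, i.e. r_inner ≈ r_outer / 2, sustained throughput there is roughly half of what it is on the outer tracks - which is why a volume that has filled its outer tracks and is writing to the inner ones benchmarks noticeably slower.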
