Dataset Viewer (auto-converted to Parquet)
Columns:
func: string
target: string
cwe: sequence
project: string
commit_id: string
hash: string
size: int64
message: string
vul: int64
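The records below follow this schema, one field per line in the order listed above. As a quick orientation, here is a minimal sketch of iterating over such a Parquet-backed dataset with the Hugging Face `datasets` library; the dataset path used here is a placeholder, since the actual repository name is not shown on this page.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library.
# "someone/vul-functions" is a hypothetical placeholder path, not the real repo name.
from datasets import load_dataset

ds = load_dataset("someone/vul-functions", split="train")

# Each record carries a function body (mostly C/C++) plus its label and provenance:
# func (source text), target ("Safe"/"Vulnerable"), cwe (list of CWE IDs),
# project, commit_id, hash, size, message (fixing commit message), vul (0 or 1).
for row in ds.select(range(3)):
    label = "vulnerable" if row["vul"] == 1 else "safe"
    print(row["project"], row["commit_id"][:12], label, row["cwe"])
```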
static void pppol2tp_session_destruct(struct sock *sk) { struct l2tp_session *session = sk->sk_user_data; skb_queue_purge(&sk->sk_receive_queue); skb_queue_purge(&sk->sk_write_queue); if (session) { sk->sk_user_data = NULL; BUG_ON(session->magic != L2TP_SESSION_MAGIC); l2tp_session_dec_refcount(session); } }
Safe
[ "CWE-416" ]
linux
f026bc29a8e093edfbb2a77700454b285c97e8ad
1.0099361935967028e+38
13
l2tp: pass tunnel pointer to ->session_create() Using l2tp_tunnel_find() in pppol2tp_session_create() and l2tp_eth_create() is racy, because no reference is held on the returned session. These functions are only used to implement the ->session_create callback which is run by l2tp_nl_cmd_session_create(). Therefore searching for the parent tunnel isn't necessary because l2tp_nl_cmd_session_create() already has a pointer to it and holds a reference. This patch modifies ->session_create()'s prototype to directly pass the the parent tunnel as parameter, thus avoiding searching for it in pppol2tp_session_create() and l2tp_eth_create(). Since we have to touch the ->session_create() call in l2tp_nl_cmd_session_create(), let's also remove the useless conditional: we know that ->session_create isn't NULL at this point because it's already been checked earlier in this same function. Finally, one might be tempted to think that the removed l2tp_tunnel_find() calls were harmless because they would return the same tunnel as the one held by l2tp_nl_cmd_session_create() anyway. But that tunnel might be removed and a new one created with same tunnel Id before the l2tp_tunnel_find() call. In this case l2tp_tunnel_find() would return the new tunnel which wouldn't be protected by the reference held by l2tp_nl_cmd_session_create(). Fixes: 309795f4bec2 ("l2tp: Add netlink control API for L2TP") Fixes: d9e31d17ceba ("l2tp: Add L2TP ethernet pseudowire support") Signed-off-by: Guillaume Nault <[email protected]> Signed-off-by: David S. Miller <[email protected]>
0
rb_str_locktmp(VALUE str) { if (FL_TEST(str, STR_TMPLOCK)) { rb_raise(rb_eRuntimeError, "temporal locking already locked string"); } FL_SET(str, STR_TMPLOCK); return str; }
Safe
[ "CWE-119" ]
ruby
1c2ef610358af33f9ded3086aa2d70aac03dcac5
1.4228748120892848e+38
8
* string.c (rb_str_justify): CVE-2009-4124. Fixes a bug reported by Emmanouel Kellinis <Emmanouel.Kellinis AT kpmg.co.uk>, KPMG London; Patch by nobu. git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@26038 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
0
static struct ath_frame_info *get_frame_info(struct sk_buff *skb) { struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb); BUILD_BUG_ON(sizeof(struct ath_frame_info) > sizeof(tx_info->rate_driver_data)); return (struct ath_frame_info *) &tx_info->rate_driver_data[0]; }
Safe
[ "CWE-362", "CWE-241" ]
linux
21f8aaee0c62708654988ce092838aa7df4d25d8
1.3304552950758307e+38
7
ath9k: protect tid->sched check We check tid->sched without a lock taken on ath_tx_aggr_sleep(). That is race condition which can result of doing list_del(&tid->list) twice (second time with poisoned list node) and cause crash like shown below: [424271.637220] BUG: unable to handle kernel paging request at 00100104 [424271.637328] IP: [<f90fc072>] ath_tx_aggr_sleep+0x62/0xe0 [ath9k] ... [424271.639953] Call Trace: [424271.639998] [<f90f6900>] ? ath9k_get_survey+0x110/0x110 [ath9k] [424271.640083] [<f90f6942>] ath9k_sta_notify+0x42/0x50 [ath9k] [424271.640177] [<f809cfef>] sta_ps_start+0x8f/0x1c0 [mac80211] [424271.640258] [<c10f730e>] ? free_compound_page+0x2e/0x40 [424271.640346] [<f809e915>] ieee80211_rx_handlers+0x9d5/0x2340 [mac80211] [424271.640437] [<c112f048>] ? kmem_cache_free+0x1d8/0x1f0 [424271.640510] [<c1345a84>] ? kfree_skbmem+0x34/0x90 [424271.640578] [<c10fc23c>] ? put_page+0x2c/0x40 [424271.640640] [<c1345a84>] ? kfree_skbmem+0x34/0x90 [424271.640706] [<c1345a84>] ? kfree_skbmem+0x34/0x90 [424271.640787] [<f809dde3>] ? ieee80211_rx_handlers_result+0x73/0x1d0 [mac80211] [424271.640897] [<f80a07a0>] ieee80211_prepare_and_rx_handle+0x520/0xad0 [mac80211] [424271.641009] [<f809e22d>] ? ieee80211_rx_handlers+0x2ed/0x2340 [mac80211] [424271.641104] [<c13846ce>] ? ip_output+0x7e/0xd0 [424271.641182] [<f80a1057>] ieee80211_rx+0x307/0x7c0 [mac80211] [424271.641266] [<f90fa6ee>] ath_rx_tasklet+0x88e/0xf70 [ath9k] [424271.641358] [<f80a0f2c>] ? ieee80211_rx+0x1dc/0x7c0 [mac80211] [424271.641445] [<f90f82db>] ath9k_tasklet+0xcb/0x130 [ath9k] Bug report: https://bugzilla.kernel.org/show_bug.cgi?id=70551 Reported-and-tested-by: Max Sydorenko <[email protected]> Cc: [email protected] Signed-off-by: Stanislaw Gruszka <[email protected]> Signed-off-by: John W. Linville <[email protected]>
0
elg_get_nbits (int algo, gcry_mpi_t *pkey) { (void)algo; return mpi_get_nbits (pkey[0]); }
Safe
[ "CWE-200" ]
libgcrypt
35cd81f134c0da4e7e6fcfe40d270ee1251f52c2
1.7630811855821999e+37
6
cipher: Use ciphertext blinding for Elgamal decryption. * cipher/elgamal.c (USE_BLINDING): New. (decrypt): Rewrite to use ciphertext blinding. -- CVE-id: CVE-2014-3591 As a countermeasure to a new side-channel attacks on sliding windows exponentiation we blind the ciphertext for Elgamal decryption. This is similar to what we are doing with RSA. This patch is a backport of the GnuPG 1.4 commit ff53cf06e966dce0daba5f2c84e03ab9db2c3c8b. Unfortunately, the performance impact of Elgamal blinding is quite noticeable (i5-2410M CPU @ 2.30GHz TP 220): Algorithm generate 100*priv 100*public ------------------------------------------------ ELG 1024 bit - 100ms 90ms ELG 2048 bit - 330ms 350ms ELG 3072 bit - 660ms 790ms Algorithm generate 100*priv 100*public ------------------------------------------------ ELG 1024 bit - 150ms 90ms ELG 2048 bit - 520ms 360ms ELG 3072 bit - 1100ms 800ms Signed-off-by: Werner Koch <[email protected]> (cherry picked from commit 410d70bad9a650e3837055e36f157894ae49a57d) Resolved conflicts: cipher/elgamal.c.
0
template<typename t> CImg<T>& operator*=(const t value) { if (is_empty()) return *this; cimg_pragma_openmp(parallel for cimg_openmp_if(size()>=262144)) cimg_rof(*this,ptrd,T) *ptrd = (T)(*ptrd * value); return *this;
Safe
[ "CWE-125" ]
CImg
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
2.2746980629457817e+38
6
Fix other issues in 'CImg<T>::load_bmp()'.
0
static void free_ioctx_reqs(struct percpu_ref *ref) { struct kioctx *ctx = container_of(ref, struct kioctx, reqs); INIT_WORK(&ctx->free_work, free_ioctx); schedule_work(&ctx->free_work); }
Safe
[ "CWE-399" ]
linux
d558023207e008a4476a3b7bb8706b2a2bf5d84f
2.1235437873189174e+38
7
aio: prevent double free in ioctx_alloc ioctx_alloc() calls aio_setup_ring() to allocate a ring. If aio_setup_ring() fails to do so it would call aio_free_ring() before returning, but ioctx_alloc() would call aio_free_ring() again causing a double free of the ring. This is easily reproducible from userspace. Signed-off-by: Sasha Levin <[email protected]> Signed-off-by: Benjamin LaHaise <[email protected]>
0
static int h264_handle_packet_fu_a(AVFormatContext *ctx, PayloadContext *data, AVPacket *pkt, const uint8_t *buf, int len, int *nal_counters, int nal_mask) { uint8_t fu_indicator, fu_header, start_bit, nal_type, nal; if (len < 3) { av_log(ctx, AV_LOG_ERROR, "Too short data for FU-A H.264 RTP packet\n"); return AVERROR_INVALIDDATA; } fu_indicator = buf[0]; fu_header = buf[1]; start_bit = fu_header >> 7; nal_type = fu_header & 0x1f; nal = fu_indicator & 0xe0 | nal_type; // skip the fu_indicator and fu_header buf += 2; len -= 2; if (start_bit && nal_counters) nal_counters[nal_type & nal_mask]++; return ff_h264_handle_frag_packet(pkt, buf, len, start_bit, &nal, 1); }
Safe
[ "CWE-119", "CWE-787" ]
FFmpeg
c42a1388a6d1bfd8001bf6a4241d8ca27e49326d
2.206145139069721e+38
25
avformat/rtpdec_h264: Fix heap-buffer-overflow Fixes: rtp_sdp/poc.sdp Found-by: Bingchang <[email protected]> Signed-off-by: Michael Niedermayer <[email protected]>
0
static int cifs_writepages(struct address_space *mapping, struct writeback_control *wbc) { struct cifs_sb_info *cifs_sb = CIFS_SB(mapping->host->i_sb); bool done = false, scanned = false, range_whole = false; pgoff_t end, index; struct cifs_writedata *wdata; struct TCP_Server_Info *server; struct page *page; int rc = 0; /* * If wsize is smaller than the page cache size, default to writing * one page at a time via cifs_writepage */ if (cifs_sb->wsize < PAGE_CACHE_SIZE) return generic_writepages(mapping, wbc); if (wbc->range_cyclic) { index = mapping->writeback_index; /* Start from prev offset */ end = -1; } else { index = wbc->range_start >> PAGE_CACHE_SHIFT; end = wbc->range_end >> PAGE_CACHE_SHIFT; if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) range_whole = true; scanned = true; } retry: while (!done && index <= end) { unsigned int i, nr_pages, found_pages; pgoff_t next = 0, tofind; struct page **pages; tofind = min((cifs_sb->wsize / PAGE_CACHE_SIZE) - 1, end - index) + 1; wdata = cifs_writedata_alloc((unsigned int)tofind, cifs_writev_complete); if (!wdata) { rc = -ENOMEM; break; } /* * find_get_pages_tag seems to return a max of 256 on each * iteration, so we must call it several times in order to * fill the array or the wsize is effectively limited to * 256 * PAGE_CACHE_SIZE. */ found_pages = 0; pages = wdata->pages; do { nr_pages = find_get_pages_tag(mapping, &index, PAGECACHE_TAG_DIRTY, tofind, pages); found_pages += nr_pages; tofind -= nr_pages; pages += nr_pages; } while (nr_pages && tofind && index <= end); if (found_pages == 0) { kref_put(&wdata->refcount, cifs_writedata_release); break; } nr_pages = 0; for (i = 0; i < found_pages; i++) { page = wdata->pages[i]; /* * At this point we hold neither mapping->tree_lock nor * lock on the page itself: the page may be truncated or * invalidated (changing page->mapping to NULL), or even * swizzled back from swapper_space to tmpfs file * mapping */ if (nr_pages == 0) lock_page(page); else if (!trylock_page(page)) break; if (unlikely(page->mapping != mapping)) { unlock_page(page); break; } if (!wbc->range_cyclic && page->index > end) { done = true; unlock_page(page); break; } if (next && (page->index != next)) { /* Not next consecutive page */ unlock_page(page); break; } if (wbc->sync_mode != WB_SYNC_NONE) wait_on_page_writeback(page); if (PageWriteback(page) || !clear_page_dirty_for_io(page)) { unlock_page(page); break; } /* * This actually clears the dirty bit in the radix tree. * See cifs_writepage() for more commentary. */ set_page_writeback(page); if (page_offset(page) >= i_size_read(mapping->host)) { done = true; unlock_page(page); end_page_writeback(page); break; } wdata->pages[i] = page; next = page->index + 1; ++nr_pages; } /* reset index to refind any pages skipped */ if (nr_pages == 0) index = wdata->pages[0]->index + 1; /* put any pages we aren't going to use */ for (i = nr_pages; i < found_pages; i++) { page_cache_release(wdata->pages[i]); wdata->pages[i] = NULL; } /* nothing to write? 
*/ if (nr_pages == 0) { kref_put(&wdata->refcount, cifs_writedata_release); continue; } wdata->sync_mode = wbc->sync_mode; wdata->nr_pages = nr_pages; wdata->offset = page_offset(wdata->pages[0]); wdata->pagesz = PAGE_CACHE_SIZE; wdata->tailsz = min(i_size_read(mapping->host) - page_offset(wdata->pages[nr_pages - 1]), (loff_t)PAGE_CACHE_SIZE); wdata->bytes = ((nr_pages - 1) * PAGE_CACHE_SIZE) + wdata->tailsz; do { if (wdata->cfile != NULL) cifsFileInfo_put(wdata->cfile); wdata->cfile = find_writable_file(CIFS_I(mapping->host), false); if (!wdata->cfile) { cifs_dbg(VFS, "No writable handles for inode\n"); rc = -EBADF; break; } wdata->pid = wdata->cfile->pid; server = tlink_tcon(wdata->cfile->tlink)->ses->server; rc = server->ops->async_writev(wdata, cifs_writedata_release); } while (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN); for (i = 0; i < nr_pages; ++i) unlock_page(wdata->pages[i]); /* send failure -- clean up the mess */ if (rc != 0) { for (i = 0; i < nr_pages; ++i) { if (rc == -EAGAIN) redirty_page_for_writepage(wbc, wdata->pages[i]); else SetPageError(wdata->pages[i]); end_page_writeback(wdata->pages[i]); page_cache_release(wdata->pages[i]); } if (rc != -EAGAIN) mapping_set_error(mapping, rc); } kref_put(&wdata->refcount, cifs_writedata_release); wbc->nr_to_write -= nr_pages; if (wbc->nr_to_write <= 0) done = true; index = next; } if (!scanned && !done) { /* * We hit the last page and there is more work to be done: wrap * back to the start of the file */ scanned = true; index = 0; goto retry; } if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0)) mapping->writeback_index = index; return rc; }
Safe
[ "CWE-119", "CWE-787" ]
linux
5d81de8e8667da7135d3a32a964087c0faf5483f
3.830820914994647e+37
210
cifs: ensure that uncached writes handle unmapped areas correctly It's possible for userland to pass down an iovec via writev() that has a bogus user pointer in it. If that happens and we're doing an uncached write, then we can end up getting less bytes than we expect from the call to iov_iter_copy_from_user. This is CVE-2014-0069 cifs_iovec_write isn't set up to handle that situation however. It'll blindly keep chugging through the page array and not filling those pages with anything useful. Worse yet, we'll later end up with a negative number in wdata->tailsz, which will confuse the sending routines and cause an oops at the very least. Fix this by having the copy phase of cifs_iovec_write stop copying data in this situation and send the last write as a short one. At the same time, we want to avoid sending a zero-length write to the server, so break out of the loop and set rc to -EFAULT if that happens. This also allows us to handle the case where no address in the iovec is valid. [Note: Marking this for stable on v3.4+ kernels, but kernels as old as v2.6.38 may have a similar problem and may need similar fix] Cc: <[email protected]> # v3.4+ Reviewed-by: Pavel Shilovsky <[email protected]> Reported-by: Al Viro <[email protected]> Signed-off-by: Jeff Layton <[email protected]> Signed-off-by: Steve French <[email protected]>
0
int kvm_read_guest_atomic(struct kvm *kvm, gpa_t gpa, void *data, unsigned long len) { int r; unsigned long addr; gfn_t gfn = gpa >> PAGE_SHIFT; int offset = offset_in_page(gpa); addr = gfn_to_hva_read(kvm, gfn); if (kvm_is_error_hva(addr)) return -EFAULT; pagefault_disable(); r = kvm_read_hva_atomic(data, (void __user *)addr + offset, len); pagefault_enable(); if (r) return -EFAULT; return 0; }
Safe
[ "CWE-399" ]
linux
12d6e7538e2d418c08f082b1b44ffa5fb7270ed8
2.0216145838942962e+38
18
KVM: perform an invalid memslot step for gpa base change PPC must flush all translations before the new memory slot is visible. Signed-off-by: Marcelo Tosatti <[email protected]> Signed-off-by: Avi Kivity <[email protected]>
0
gtTileContig(TIFFRGBAImage* img, uint32* raster, uint32 w, uint32 h) { TIFF* tif = img->tif; tileContigRoutine put = img->put.contig; uint32 col, row, y, rowstoread; tmsize_t pos; uint32 tw, th; unsigned char* buf; int32 fromskew, toskew; uint32 nrow; int ret = 1, flip; buf = (unsigned char*) _TIFFmalloc(TIFFTileSize(tif)); if (buf == 0) { TIFFErrorExt(tif->tif_clientdata, TIFFFileName(tif), "%s", "No space for tile buffer"); return (0); } _TIFFmemset(buf, 0, TIFFTileSize(tif)); TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tw); TIFFGetField(tif, TIFFTAG_TILELENGTH, &th); flip = setorientation(img); if (flip & FLIP_VERTICALLY) { y = h - 1; toskew = -(int32)(tw + w); } else { y = 0; toskew = -(int32)(tw - w); } for (row = 0; row < h; row += nrow) { rowstoread = th - (row + img->row_offset) % th; nrow = (row + rowstoread > h ? h - row : rowstoread); for (col = 0; col < w; col += tw) { if (TIFFReadTile(tif, buf, col+img->col_offset, row+img->row_offset, 0, 0)==(tmsize_t)(-1) && img->stoponerr) { ret = 0; break; } pos = ((row+img->row_offset) % th) * TIFFTileRowSize(tif); if (col + tw > w) { /* * Tile is clipped horizontally. Calculate * visible portion and skewing factors. */ uint32 npix = w - col; fromskew = tw - npix; (*put)(img, raster+y*w+col, col, y, npix, nrow, fromskew, toskew + fromskew, buf + pos); } else { (*put)(img, raster+y*w+col, col, y, tw, nrow, 0, toskew, buf + pos); } } y += (flip & FLIP_VERTICALLY ? -(int32) nrow : (int32) nrow); } _TIFFfree(buf); if (flip & FLIP_HORIZONTALLY) { uint32 line; for (line = 0; line < h; line++) { uint32 *left = raster + (line * w); uint32 *right = left + w - 1; while ( left < right ) { uint32 temp = *left; *left = *right; *right = temp; left++, right--; } } } return (ret); }
Safe
[ "CWE-119" ]
libtiff
40a5955cbf0df62b1f9e9bd7d9657b0070725d19
2.420751413697241e+38
85
* libtiff/tif_next.c: add new tests to check that we don't read outside of the compressed input stream buffer. * libtiff/tif_getimage.c: in OJPEG case, fix checks on strile width/height
0
smpl_t aubio_onset_get_last_ms (const aubio_onset_t *o) { return aubio_onset_get_last_s (o) * 1000.; }
Safe
[]
aubio
e4e0861cffbc8d3a53dcd18f9ae85797690d67c7
5.57750557099106e+37
4
[onset] safer deletion method
0
word32 btoi(byte b) { return b - 0x30; }
Safe
[ "CWE-254" ]
mysql-server
e7061f7e5a96c66cb2e0bf46bec7f6ff35801a69
3.120088175859254e+38
4
Bug #22738607: YASSL FUNCTION X509_NAME_GET_INDEX_BY_NID IS NOT WORKING AS EXPECTED.
0
struct mnt_namespace *copy_mnt_ns(int flags, struct mnt_namespace *ns, struct fs_struct *new_fs) { struct mnt_namespace *new_ns; BUG_ON(!ns); get_mnt_ns(ns); if (!(flags & CLONE_NEWNS)) return ns; new_ns = dup_mnt_ns(ns, new_fs); put_mnt_ns(ns); return new_ns; }
Safe
[ "CWE-269" ]
linux-2.6
ee6f958291e2a768fd727e7a67badfff0b67711a
2.7765603215189576e+38
16
check privileges before setting mount propagation There's a missing check for CAP_SYS_ADMIN in do_change_type(). Signed-off-by: Miklos Szeredi <[email protected]> Cc: Al Viro <[email protected]> Cc: Christoph Hellwig <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
0
spnego_gss_set_cred_option( OM_uint32 *minor_status, gss_cred_id_t *cred_handle, const gss_OID desired_object, const gss_buffer_t value) { OM_uint32 ret; OM_uint32 tmp_minor_status; spnego_gss_cred_id_t spcred = (spnego_gss_cred_id_t)*cred_handle; gss_cred_id_t mcred; mcred = (spcred == NULL) ? GSS_C_NO_CREDENTIAL : spcred->mcred; ret = gss_set_cred_option(minor_status, &mcred, desired_object, value); if (ret == GSS_S_COMPLETE && spcred == NULL) { /* * If the mechanism allocated a new credential handle, then * we need to wrap it up in an SPNEGO credential handle. */ ret = create_spnego_cred(minor_status, mcred, &spcred); if (ret != GSS_S_COMPLETE) { gss_release_cred(&tmp_minor_status, &mcred); return (ret); } *cred_handle = (gss_cred_id_t)spcred; } if (ret != GSS_S_COMPLETE) return (ret); /* Recognize KRB5_NO_CI_FLAGS_X_OID and avoid asking for integrity. */ if (g_OID_equal(desired_object, no_ci_flags_oid)) spcred->no_ask_integ = 1; return (GSS_S_COMPLETE); }
Safe
[ "CWE-18", "CWE-763" ]
krb5
b51b33f2bc5d1497ddf5bd107f791c101695000d
3.161173783903794e+38
39
Fix SPNEGO context aliasing bugs [CVE-2015-2695] The SPNEGO mechanism currently replaces its context handle with the mechanism context handle upon establishment, under the assumption that most GSS functions are only called after context establishment. This assumption is incorrect, and can lead to aliasing violations for some programs. Maintain the SPNEGO context structure after context establishment and refer to it in all GSS methods. Add initiate and opened flags to the SPNEGO context structure for use in gss_inquire_context() prior to context establishment. CVE-2015-2695: In MIT krb5 1.5 and later, applications which call gss_inquire_context() on a partially-established SPNEGO context can cause the GSS-API library to read from a pointer using the wrong type, generally causing a process crash. This bug may go unnoticed, because the most common SPNEGO authentication scenario establishes the context after just one call to gss_accept_sec_context(). Java server applications using the native JGSS provider are vulnerable to this bug. A carefully crafted SPNEGO packet might allow the gss_inquire_context() call to succeed with attacker-determined results, but applications should not make access control decisions based on gss_inquire_context() results prior to context establishment. CVSSv2 Vector: AV:N/AC:M/Au:N/C:N/I:N/A:C/E:POC/RL:OF/RC:C [[email protected]: several bugfixes, style changes, and edge-case behavior changes; commit message and CVE description] ticket: 8244 target_version: 1.14 tags: pullup
0
session_for_headers (CockpitAuth *self, const gchar *path, GHashTable *in_headers) { gchar *cookie = NULL; gchar *raw = NULL; const char *prefix = "v=2;k="; CockpitSession *ret = NULL; gchar *application; gchar *cookie_name = NULL; g_return_val_if_fail (self != NULL, FALSE); g_return_val_if_fail (in_headers != NULL, FALSE); application = cockpit_auth_parse_application (path, NULL); if (!application) return NULL; cookie_name = application_cookie_name (application); raw = cockpit_web_server_parse_cookie (in_headers, cookie_name); if (raw) { cookie = base64_decode_string (raw); if (cookie != NULL) { if (g_str_has_prefix (cookie, prefix)) ret = g_hash_table_lookup (self->sessions, cookie); else g_debug ("invalid or unsupported cookie: %s", cookie); /* We must never find the default session based on a cookie */ g_assert (!ret || !g_str_equal (ret->cookie, LOCAL_SESSION)); g_assert (!ret || !g_str_equal (ret->name, LOCAL_SESSION)); g_free (cookie); } g_free (raw); } /* Check for a default session for auto-login */ if (!ret) ret = g_hash_table_lookup (self->sessions, LOCAL_SESSION); g_free (application); g_free (cookie_name); return ret; }
Safe
[ "CWE-1021" ]
cockpit
46f6839d1af4e662648a85f3e54bba2d57f39f0e
1.3194374294478085e+38
46
ws: Restrict our cookie to the login host only Mark our cookie as `SameSite: Strict` [1]. The current `None` default will soon be moved to `Lax` by Firefox and Chromium, and recent versions started to throw a warning about it. [1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite https://bugzilla.redhat.com/show_bug.cgi?id=1891944
0
template<typename tc1, typename tc2> CImg<T>& draw_text(const int x0, const int y0, const char *const text, const tc1 *const foreground_color, const tc2 *const background_color, const float opacity=1, const unsigned int font_height=13, ...) { if (!font_height) return *this; CImg<charT> tmp(2048); std::va_list ap; va_start(ap,font_height); cimg_vsnprintf(tmp,tmp._width,text,ap); va_end(ap); const CImgList<ucharT>& font = CImgList<ucharT>::font(font_height,true); _draw_text(x0,y0,tmp,foreground_color,background_color,opacity,font,true); return *this;
Safe
[ "CWE-125" ]
CImg
10af1e8c1ad2a58a0a3342a856bae63e8f257abb
2.022005406789141e+38
12
Fix other issues in 'CImg<T>::load_bmp()'.
0
static inline struct ath6kl_usb *ath6kl_usb_priv(struct ath6kl *ar) { return ar->hif_priv; }
Safe
[ "CWE-476" ]
linux
39d170b3cb62ba98567f5c4f40c27b5864b304e5
3.2721325800457717e+38
4
ath6kl: fix a NULL-ptr-deref bug in ath6kl_usb_alloc_urb_from_pipe() The `ar_usb` field of `ath6kl_usb_pipe_usb_pipe` objects are initialized to point to the containing `ath6kl_usb` object according to endpoint descriptors read from the device side, as shown below in `ath6kl_usb_setup_pipe_resources`: for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { endpoint = &iface_desc->endpoint[i].desc; // get the address from endpoint descriptor pipe_num = ath6kl_usb_get_logical_pipe_num(ar_usb, endpoint->bEndpointAddress, &urbcount); ...... // select the pipe object pipe = &ar_usb->pipes[pipe_num]; // initialize the ar_usb field pipe->ar_usb = ar_usb; } The driver assumes that the addresses reported in endpoint descriptors from device side to be complete. If a device is malicious and does not report complete addresses, it may trigger NULL-ptr-deref `ath6kl_usb_alloc_urb_from_pipe` and `ath6kl_usb_free_urb_to_pipe`. This patch fixes the bug by preventing potential NULL-ptr-deref (CVE-2019-15098). Signed-off-by: Hui Peng <[email protected]> Reported-by: Hui Peng <[email protected]> Reported-by: Mathias Payer <[email protected]> Reviewed-by: Greg Kroah-Hartman <[email protected]> Signed-off-by: Kalle Valo <[email protected]>
0
int ttm_dma_tt_init(struct ttm_dma_tt *ttm_dma, struct ttm_buffer_object *bo, uint32_t page_flags) { struct ttm_tt *ttm = &ttm_dma->ttm; ttm_tt_init_fields(ttm, bo, page_flags); INIT_LIST_HEAD(&ttm_dma->pages_list); if (ttm_dma_tt_alloc_page_directory(ttm_dma)) { ttm_tt_destroy(ttm); pr_err("Failed allocating page table\n"); return -ENOMEM; } return 0; }
Vulnerable
[]
linux
5de5b6ecf97a021f29403aa272cb4e03318ef586
2.5740050232571664e+38
15
drm/ttm/nouveau: don't call tt destroy callback on alloc failure. This is confusing, and from my reading of all the drivers only nouveau got this right. Just make the API act under driver control of it's own allocation failing, and don't call destroy, if the page table fails to create there is nothing to cleanup here. (I'm willing to believe I've missed something here, so please review deeply). Reviewed-by: Christian König <[email protected]> Signed-off-by: Dave Airlie <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
1
nbd_unlocked_opt_list (struct nbd_handle *h, nbd_list_callback *list) { struct list_helper s = { .list = *list }; nbd_list_callback l = { .callback = list_visitor, .user_data = &s }; nbd_completion_callback c = { .callback = list_complete, .user_data = &s }; if (nbd_unlocked_aio_opt_list (h, &l, &c) == -1) return -1; SET_CALLBACK_TO_NULL (*list); if (wait_for_option (h) == -1) return -1; if (s.err) { set_error (s.err, "server replied with error to list request"); return -1; } return s.count; }
Safe
[ "CWE-617" ]
libnbd
fb4440de9cc76e9c14bd3ddf3333e78621f40ad0
7.390194667580316e+37
18
opt_go: Tolerate unplanned server death While debugging some experimental nbdkit code that was triggering an assertion failure in nbdkit, I noticed a secondary failure of nbdsh also dying from an assertion: libnbd: debug: nbdsh: nbd_opt_go: transition: NEWSTYLE.OPT_GO.SEND -> DEAD libnbd: debug: nbdsh: nbd_opt_go: option queued, ignoring state machine failure nbdsh: opt.c:86: nbd_unlocked_opt_go: Assertion `nbd_internal_is_state_negotiating (get_next_state (h))' failed. Although my trigger was from non-production nbdkit code, libnbd should never die from an assertion failure merely because a server disappeared at the wrong moment during an incomplete reply to NBD_OPT_GO or NBD_OPT_INFO. If this is assigned a CVE, a followup patch will add mention of it in docs/libnbd-security.pod. Fixes: bbf1c51392 (api: Give aio_opt_go a completion callback)
0
mono_image_get_generic_param_info (MonoReflectionGenericParam *gparam, guint32 owner, MonoDynamicImage *assembly) { GenericParamTableEntry *entry; /* * The GenericParam table must be sorted according to the `owner' field. * We need to do this sorting prior to writing the GenericParamConstraint * table, since we have to use the final GenericParam table indices there * and they must also be sorted. */ entry = g_new0 (GenericParamTableEntry, 1); entry->owner = owner; /* FIXME: track where gen_params should be freed and remove the GC root as well */ MONO_GC_REGISTER_ROOT_IF_MOVING (entry->gparam); entry->gparam = gparam; g_ptr_array_add (assembly->gen_params, entry); }
Safe
[ "CWE-399", "CWE-264" ]
mono
89d1455a80ef13cddee5d79ec00c06055da3085c
3.3038486910060444e+38
19
Don't use finalization to cleanup dynamic methods. * reflection.c: Use a reference queue to cleanup dynamic methods instead of finalization. * runtime.c: Shutdown the dynamic method queue before runtime cleanup begins. * DynamicMethod.cs: No longer finalizable. * icall-def.h: Remove unused dynamic method icall. Fixes #660422
0
thumbnail_cancel (NautilusDirectory *directory) { if (directory->details->thumbnail_state != NULL) { g_cancellable_cancel (directory->details->thumbnail_state->cancellable); directory->details->thumbnail_state->directory = NULL; directory->details->thumbnail_state = NULL; async_job_end (directory, "thumbnail"); } }
Safe
[ "CWE-20" ]
nautilus
1630f53481f445ada0a455e9979236d31a8d3bb0
2.4529566677982575e+38
10
mime-actions: use file metadata for trusting desktop files Currently we only trust desktop files that have the executable bit set, and don't replace the displayed icon or the displayed name until it's trusted, which prevents for running random programs by a malicious desktop file. However, the executable permission is preserved if the desktop file comes from a compressed file. To prevent this, add a metadata::trusted metadata to the file once the user acknowledges the file as trusted. This adds metadata to the file, which cannot be added unless it has access to the computer. Also remove the SHEBANG "trusted" content we were putting inside the desktop file, since that doesn't add more security since it can come with the file itself. https://bugzilla.gnome.org/show_bug.cgi?id=777991
0
bool GetInt(const json &o, int &val) { #ifdef TINYGLTF_USE_RAPIDJSON if (!o.IsDouble()) { if (o.IsInt()) { val = o.GetInt(); return true; } else if (o.IsUint()) { val = static_cast<int>(o.GetUint()); return true; } else if (o.IsInt64()) { val = static_cast<int>(o.GetInt64()); return true; } else if (o.IsUint64()) { val = static_cast<int>(o.GetUint64()); return true; } } return false; #else auto type = o.type(); if ((type == json::value_t::number_integer) || (type == json::value_t::number_unsigned)) { val = static_cast<int>(o.get<int64_t>()); return true; } return false; #endif }
Safe
[ "CWE-20" ]
tinygltf
52ff00a38447f06a17eab1caa2cf0730a119c751
1.9015956248271074e+38
31
Do not expand file path since its not necessary for glTF asset path(URI) and for security reason(`wordexp`).
0
static int rsvp_dump(struct tcf_proto *tp, unsigned long fh, struct sk_buff *skb, struct tcmsg *t) { struct rsvp_filter *f = (struct rsvp_filter*)fh; struct rsvp_session *s; unsigned char *b = skb->tail; struct rtattr *rta; struct tc_rsvp_pinfo pinfo; if (f == NULL) return skb->len; s = f->sess; t->tcm_handle = f->handle; rta = (struct rtattr*)b; RTA_PUT(skb, TCA_OPTIONS, 0, NULL); RTA_PUT(skb, TCA_RSVP_DST, sizeof(s->dst), &s->dst); pinfo.dpi = s->dpi; pinfo.spi = f->spi; pinfo.protocol = s->protocol; pinfo.tunnelid = s->tunnelid; pinfo.tunnelhdr = f->tunnelhdr; RTA_PUT(skb, TCA_RSVP_PINFO, sizeof(pinfo), &pinfo); if (f->res.classid) RTA_PUT(skb, TCA_RSVP_CLASSID, 4, &f->res.classid); if (((f->handle>>8)&0xFF) != 16) RTA_PUT(skb, TCA_RSVP_SRC, sizeof(f->src), f->src); if (tcf_exts_dump(skb, &f->exts, &rsvp_ext_map) < 0) goto rtattr_failure; rta->rta_len = skb->tail - b; if (tcf_exts_dump_stats(skb, &f->exts, &rsvp_ext_map) < 0) goto rtattr_failure; return skb->len; rtattr_failure: skb_trim(skb, b - skb->data); return -1; }
Vulnerable
[ "CWE-200" ]
linux-2.6
8a47077a0b5aa2649751c46e7a27884e6686ccbf
2.5236571539333695e+38
44
[NETLINK]: Missing padding fields in dumped structures Plug holes with padding fields and initialized them to zero. Signed-off-by: Patrick McHardy <[email protected]> Signed-off-by: David S. Miller <[email protected]>
1
static int TSS_rawhmac(unsigned char *digest, const unsigned char *key, unsigned int keylen, ...) { struct sdesc *sdesc; va_list argp; unsigned int dlen; unsigned char *data; int ret; sdesc = init_sdesc(hmacalg); if (IS_ERR(sdesc)) { pr_info("trusted_key: can't alloc %s\n", hmac_alg); return PTR_ERR(sdesc); } ret = crypto_shash_setkey(hmacalg, key, keylen); if (ret < 0) goto out; ret = crypto_shash_init(&sdesc->shash); if (ret < 0) goto out; va_start(argp, keylen); for (;;) { dlen = va_arg(argp, unsigned int); if (dlen == 0) break; data = va_arg(argp, unsigned char *); if (data == NULL) { ret = -EINVAL; break; } ret = crypto_shash_update(&sdesc->shash, data, dlen); if (ret < 0) break; } va_end(argp); if (!ret) ret = crypto_shash_final(&sdesc->shash, digest); out: kzfree(sdesc); return ret; }
Safe
[ "CWE-20" ]
linux
363b02dab09b3226f3bd1420dad9c72b79a42a76
2.5209190464593e+38
43
KEYS: Fix race between updating and finding a negative key Consolidate KEY_FLAG_INSTANTIATED, KEY_FLAG_NEGATIVE and the rejection error into one field such that: (1) The instantiation state can be modified/read atomically. (2) The error can be accessed atomically with the state. (3) The error isn't stored unioned with the payload pointers. This deals with the problem that the state is spread over three different objects (two bits and a separate variable) and reading or updating them atomically isn't practical, given that not only can uninstantiated keys change into instantiated or rejected keys, but rejected keys can also turn into instantiated keys - and someone accessing the key might not be using any locking. The main side effect of this problem is that what was held in the payload may change, depending on the state. For instance, you might observe the key to be in the rejected state. You then read the cached error, but if the key semaphore wasn't locked, the key might've become instantiated between the two reads - and you might now have something in hand that isn't actually an error code. The state is now KEY_IS_UNINSTANTIATED, KEY_IS_POSITIVE or a negative error code if the key is negatively instantiated. The key_is_instantiated() function is replaced with key_is_positive() to avoid confusion as negative keys are also 'instantiated'. Additionally, barriering is included: (1) Order payload-set before state-set during instantiation. (2) Order state-read before payload-read when using the key. Further separate barriering is necessary if RCU is being used to access the payload content after reading the payload pointers. Fixes: 146aa8b1453b ("KEYS: Merge the type-specific data with the payload data") Cc: [email protected] # v4.4+ Reported-by: Eric Biggers <[email protected]> Signed-off-by: David Howells <[email protected]> Reviewed-by: Eric Biggers <[email protected]>
0
static int do_umount(struct vfsmount *mnt, int flags) { struct super_block *sb = mnt->mnt_sb; int retval; LIST_HEAD(umount_list); retval = security_sb_umount(mnt, flags); if (retval) return retval; /* * Allow userspace to request a mountpoint be expired rather than * unmounting unconditionally. Unmount only happens if: * (1) the mark is already set (the mark is cleared by mntput()) * (2) the usage count == 1 [parent vfsmount] + 1 [sys_umount] */ if (flags & MNT_EXPIRE) { if (mnt == current->fs->rootmnt || flags & (MNT_FORCE | MNT_DETACH)) return -EINVAL; if (atomic_read(&mnt->mnt_count) != 2) return -EBUSY; if (!xchg(&mnt->mnt_expiry_mark, 1)) return -EAGAIN; } /* * If we may have to abort operations to get out of this * mount, and they will themselves hold resources we must * allow the fs to do things. In the Unix tradition of * 'Gee thats tricky lets do it in userspace' the umount_begin * might fail to complete on the first run through as other tasks * must return, and the like. Thats for the mount program to worry * about for the moment. */ lock_kernel(); if (sb->s_op->umount_begin) sb->s_op->umount_begin(mnt, flags); unlock_kernel(); /* * No sense to grab the lock for this test, but test itself looks * somewhat bogus. Suggestions for better replacement? * Ho-hum... In principle, we might treat that as umount + switch * to rootfs. GC would eventually take care of the old vfsmount. * Actually it makes sense, especially if rootfs would contain a * /reboot - static binary that would close all descriptors and * call reboot(9). Then init(8) could umount root and exec /reboot. */ if (mnt == current->fs->rootmnt && !(flags & MNT_DETACH)) { /* * Special case for "unmounting" root ... * we just try to remount it readonly. */ down_write(&sb->s_umount); if (!(sb->s_flags & MS_RDONLY)) { lock_kernel(); DQUOT_OFF(sb); retval = do_remount_sb(sb, MS_RDONLY, NULL, 0); unlock_kernel(); } up_write(&sb->s_umount); return retval; } down_write(&namespace_sem); spin_lock(&vfsmount_lock); event++; retval = -EBUSY; if (flags & MNT_DETACH || !propagate_mount_busy(mnt, 2)) { if (!list_empty(&mnt->mnt_list)) umount_tree(mnt, 1, &umount_list); retval = 0; } spin_unlock(&vfsmount_lock); if (retval) security_sb_umount_busy(mnt); up_write(&namespace_sem); release_mounts(&umount_list); return retval; }
Safe
[ "CWE-269" ]
linux-2.6
ee6f958291e2a768fd727e7a67badfff0b67711a
1.533847410263577e+38
85
check privileges before setting mount propagation There's a missing check for CAP_SYS_ADMIN in do_change_type(). Signed-off-by: Miklos Szeredi <[email protected]> Cc: Al Viro <[email protected]> Cc: Christoph Hellwig <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
0
void trace_free_pid_list(struct trace_pid_list *pid_list) { vfree(pid_list->pids); kfree(pid_list); }
Safe
[ "CWE-415" ]
linux
4397f04575c44e1440ec2e49b6302785c95fd2f8
3.763253002993191e+37
5
tracing: Fix possible double free on failure of allocating trace buffer Jing Xia and Chunyan Zhang reported that on failing to allocate part of the tracing buffer, memory is freed, but the pointers that point to them are not initialized back to NULL, and later paths may try to free the freed memory again. Jing and Chunyan fixed one of the locations that does this, but missed a spot. Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Fixes: 737223fbca3b1 ("tracing: Consolidate buffer allocation code") Reported-by: Jing Xia <[email protected]> Reported-by: Chunyan Zhang <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
0
int ADDCALL sass_compiler_parse(struct Sass_Compiler* compiler) { if (compiler == 0) return 1; if (compiler->state == SASS_COMPILER_PARSED) return 0; if (compiler->state != SASS_COMPILER_CREATED) return -1; if (compiler->c_ctx == NULL) return 1; if (compiler->cpp_ctx == NULL) return 1; if (compiler->c_ctx->error_status) return compiler->c_ctx->error_status; // parse the context we have set up (file or data) compiler->root = sass_parse_block(compiler); // success return 0; }
Safe
[ "CWE-125" ]
libsass
8f40dc03e5ab5a8b2ebeb72b31f8d1adbb2fd6ae
3.342391399333774e+38
14
Optimize line_begin/end search in `handle_error` There is no need to advance by UTF-8 code points when searching for an ASCII character, because UTF-8 is a prefix-free encoding.
0
static GF_Err nhmldump_initialize(GF_Filter *filter) { return GF_OK; }
Safe
[ "CWE-787" ]
gpac
ea1eca00fd92fa17f0e25ac25652622924a9a6a0
1.0807331512114095e+37
4
fixed #2138
0
static void vmxnet3_class_init(ObjectClass *class, void *data) { DeviceClass *dc = DEVICE_CLASS(class); PCIDeviceClass *c = PCI_DEVICE_CLASS(class); c->realize = vmxnet3_pci_realize; c->exit = vmxnet3_pci_uninit; c->vendor_id = PCI_VENDOR_ID_VMWARE; c->device_id = PCI_DEVICE_ID_VMWARE_VMXNET3; c->revision = PCI_DEVICE_ID_VMWARE_VMXNET3_REVISION; c->class_id = PCI_CLASS_NETWORK_ETHERNET; c->subsystem_vendor_id = PCI_VENDOR_ID_VMWARE; c->subsystem_id = PCI_DEVICE_ID_VMWARE_VMXNET3; dc->desc = "VMWare Paravirtualized Ethernet v3"; dc->reset = vmxnet3_qdev_reset; dc->vmsd = &vmstate_vmxnet3; dc->props = vmxnet3_properties; set_bit(DEVICE_CATEGORY_NETWORK, dc->categories); }
Safe
[ "CWE-20" ]
qemu
a7278b36fcab9af469563bd7b9dadebe2ae25e48
7.412432040981147e+37
19
net/vmxnet3: Refine l2 header validation Validation of l2 header length assumed minimal packet size as eth_header + 2 * vlan_header regardless of the actual protocol. This caused crash for valid non-IP packets shorter than 22 bytes, as 'tx_pkt->packet_type' hasn't been assigned for such packets, and 'vmxnet3_on_tx_done_update_stats()' expects it to be properly set. Refine header length validation in 'vmxnet_tx_pkt_parse_headers'. Check its return value during packet processing flow. As a side effect, in case IPv4 and IPv6 header validation failure, corrupt packets will be dropped. Signed-off-by: Dana Rubin <[email protected]> Signed-off-by: Shmulik Ladkani <[email protected]> Signed-off-by: Jason Wang <[email protected]>
0
static pj_status_t pjsip_auth_verify( const pjsip_authorization_hdr *hdr, const pj_str_t *method, const pjsip_cred_info *cred_info ) { if (pj_stricmp(&hdr->scheme, &pjsip_DIGEST_STR) == 0) { char digest_buf[PJSIP_MD5STRLEN]; pj_str_t digest; const pjsip_digest_credential *dig = &hdr->credential.digest; /* Check that username and realm match. * These checks should have been performed before entering this * function. */ PJ_ASSERT_RETURN(pj_strcmp(&dig->username, &cred_info->username) == 0, PJ_EINVALIDOP); PJ_ASSERT_RETURN(pj_strcmp(&dig->realm, &cred_info->realm) == 0, PJ_EINVALIDOP); /* Prepare for our digest calculation. */ digest.ptr = digest_buf; digest.slen = PJSIP_MD5STRLEN; /* Create digest for comparison. */ pjsip_auth_create_digest(&digest, &hdr->credential.digest.nonce, &hdr->credential.digest.nc, &hdr->credential.digest.cnonce, &hdr->credential.digest.qop, &hdr->credential.digest.uri, &cred_info->realm, cred_info, method ); /* Compare digest. */ return (pj_stricmp(&digest, &hdr->credential.digest.response) == 0) ? PJ_SUCCESS : PJSIP_EAUTHINVALIDDIGEST; } else { pj_assert(!"Unsupported authentication scheme"); return PJSIP_EINVALIDAUTHSCHEME; } }
Vulnerable
[ "CWE-120", "CWE-787" ]
pjproject
d27f79da11df7bc8bb56c2f291d71e54df8d2c47
1.2480054812213945e+38
42
Use PJ_ASSERT_RETURN() on pjsip_auth_create_digest() and pjsua_init_tpselector() (#3009) * Use PJ_ASSERT_RETURN on pjsip_auth_create_digest * Use PJ_ASSERT_RETURN on pjsua_init_tpselector() * Fix incorrect check. * Add return value to pjsip_auth_create_digest() and pjsip_auth_create_digestSHA256() * Modification based on comments.
1
void readStructBegin() { nestedStructFieldIds_.push_back(lastFieldId_); lastFieldId_ = 0; }
Safe
[ "CWE-400", "CWE-522", "CWE-674" ]
mcrouter
97e033b3bb0cb16b61bf49f0dc7f311a3e0edd1b
1.9786575455846204e+38
4
Attempt to make CarbonProtocolReader::skip tail recursive Reviewed By: edenzik Differential Revision: D17967570 fbshipit-source-id: fdc32e190a521349c7c8f4d6081902fa18eb0284
0
static int commit_match(struct commit *commit, struct rev_info *opt) { int retval; const char *encoding; const char *message; struct strbuf buf = STRBUF_INIT; if (!opt->grep_filter.pattern_list && !opt->grep_filter.header_list) return 1; /* Prepend "fake" headers as needed */ if (opt->grep_filter.use_reflog_filter) { strbuf_addstr(&buf, "reflog "); get_reflog_message(&buf, opt->reflog_info); strbuf_addch(&buf, '\n'); } /* * We grep in the user's output encoding, under the assumption that it * is the encoding they are most likely to write their grep pattern * for. In addition, it means we will match the "notes" encoding below, * so we will not end up with a buffer that has two different encodings * in it. */ encoding = get_log_output_encoding(); message = logmsg_reencode(commit, NULL, encoding); /* Copy the commit to temporary if we are using "fake" headers */ if (buf.len) strbuf_addstr(&buf, message); if (opt->grep_filter.header_list && opt->mailmap) { if (!buf.len) strbuf_addstr(&buf, message); commit_rewrite_person(&buf, "\nauthor ", opt->mailmap); commit_rewrite_person(&buf, "\ncommitter ", opt->mailmap); } /* Append "fake" message parts as needed */ if (opt->show_notes) { if (!buf.len) strbuf_addstr(&buf, message); format_display_notes(commit->object.oid.hash, &buf, encoding, 1); } /* * Find either in the original commit message, or in the temporary. * Note that we cast away the constness of "message" here. It is * const because it may come from the cached commit buffer. That's OK, * because we know that it is modifiable heap memory, and that while * grep_buffer may modify it for speed, it will restore any * changes before returning. */ if (buf.len) retval = grep_buffer(&opt->grep_filter, buf.buf, buf.len); else retval = grep_buffer(&opt->grep_filter, (char *)message, strlen(message)); strbuf_release(&buf); unuse_commit_buffer(commit, message); return opt->invert_grep ? !retval : retval; }
Safe
[]
git
a937b37e766479c8e780b17cce9c4b252fd97e40
2.067519777895439e+38
63
revision: quit pruning diff more quickly when possible When the revision traversal machinery is given a pathspec, we must compute the parent-diff for each commit to determine which ones are TREESAME. We set the QUICK diff flag to avoid looking at more entries than we need; we really just care whether there are any changes at all. But there is one case where we want to know a bit more: if --remove-empty is set, we care about finding cases where the change consists only of added entries (in which case we may prune the parent in try_to_simplify_commit()). To cover that case, our file_add_remove() callback does not quit the diff upon seeing an added entry; it keeps looking for other types of entries. But this means when --remove-empty is not set (and it is not by default), we compute more of the diff than is necessary. You can see this in a pathological case where a commit adds a very large number of entries, and we limit based on a broad pathspec. E.g.: perl -e ' chomp(my $blob = `git hash-object -w --stdin </dev/null`); for my $a (1..1000) { for my $b (1..1000) { print "100644 $blob\t$a/$b\n"; } } ' | git update-index --index-info git commit -qm add git rev-list HEAD -- . This case takes about 100ms now, but after this patch only needs 6ms. That's not a huge improvement, but it's easy to get and it protects us against even more pathological cases (e.g., going from 1 million to 10 million files would take ten times as long with the current code, but not increase at all after this patch). This is reported to minorly speed-up pathspec limiting in real world repositories (like the 100-million-file Windows repository), but probably won't make a noticeable difference outside of pathological setups. This patch actually covers the case without --remove-empty, and the case where we see only deletions. See the in-code comment for details. Note that we have to add a new member to the diff_options struct so that our callback can see the value of revs->remove_empty_trees. This callback parameter could be passed to the "add_remove" and "change" callbacks, but there's not much point. They already receive the diff_options struct, and doing it this way avoids having to update the function signature of the other callbacks (arguably the format_callback and output_prefix functions could benefit from the same simplification). Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
0
has_colors(void) { return NCURSES_SP_NAME(has_colors) (CURRENT_SCREEN); }
Safe
[]
ncurses
790a85dbd4a81d5f5d8dd02a44d84f01512ef443
1.837465203536916e+38
4
ncurses 6.2 - patch 20200531 + correct configure version-check/warnng for g++ to allow for 10.x + re-enable "bel" in konsole-base (report by Nia Huang) + add linux-s entry (patch by Alexandre Montaron). + drop long-obsolete convert_configure.pl + add test/test_parm.c, for checking tparm changes. + improve parameter-checking for tparm, adding function _nc_tiparm() to handle the most-used case, which accepts only numeric parameters (report/testcase by "puppet-meteor"). + use a more conservative estimate of the buffer-size in lib_tparm.c's save_text() and save_number(), in case the sprintf() function passes-through unexpected characters from a format specifier (report/testcase by "puppet-meteor"). + add a check for end-of-string in cvtchar to handle a malformed string in infotocap (report/testcase by "puppet-meteor").
0
com_charset(String *buffer __attribute__((unused)), char *line) { char buff[256], *param; CHARSET_INFO * new_cs; strmake(buff, line, sizeof(buff) - 1); param= get_arg(buff, 0); if (!param || !*param) { return put_info("Usage: \\C charset_name | charset charset_name", INFO_ERROR, 0); } new_cs= get_charset_by_csname(param, MY_CS_PRIMARY, MYF(MY_WME)); if (new_cs) { charset_info= new_cs; mysql_set_character_set(&mysql, charset_info->csname); default_charset= (char *)charset_info->csname; put_info("Charset changed", INFO_INFO); } else put_info("Charset is not found", INFO_INFO); return 0; }
Safe
[ "CWE-295" ]
mysql-server
b3e9211e48a3fb586e88b0270a175d2348935424
2.1399224223264634e+38
22
WL#9072: Backport WL#8785 to 5.5
0
static char *req_uri_field(request_rec *r) { return r->uri; }
Safe
[ "CWE-20" ]
httpd
78eb3b9235515652ed141353d98c239237030410
2.783670790506017e+37
4
*) SECURITY: CVE-2015-0228 (cve.mitre.org) mod_lua: A maliciously crafted websockets PING after a script calls r:wsupgrade() can cause a child process crash. [Edward Lu <Chaosed0 gmail.com>] Discovered by Guido Vranken <guidovranken gmail.com> Submitted by: Edward Lu Committed by: covener git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/trunk@1657261 13f79535-47bb-0310-9956-ffa450edef68
0
static void mntput_no_expire(struct mount *mnt) { put_again: #ifdef CONFIG_SMP br_read_lock(&vfsmount_lock); if (likely(mnt->mnt_ns)) { /* shouldn't be the last one */ mnt_add_count(mnt, -1); br_read_unlock(&vfsmount_lock); return; } br_read_unlock(&vfsmount_lock); br_write_lock(&vfsmount_lock); mnt_add_count(mnt, -1); if (mnt_get_count(mnt)) { br_write_unlock(&vfsmount_lock); return; } #else mnt_add_count(mnt, -1); if (likely(mnt_get_count(mnt))) return; br_write_lock(&vfsmount_lock); #endif if (unlikely(mnt->mnt_pinned)) { mnt_add_count(mnt, mnt->mnt_pinned + 1); mnt->mnt_pinned = 0; br_write_unlock(&vfsmount_lock); acct_auto_close_mnt(&mnt->mnt); goto put_again; } list_del(&mnt->mnt_instance); br_write_unlock(&vfsmount_lock); mntfree(mnt); }
Safe
[ "CWE-284", "CWE-264" ]
linux
3151527ee007b73a0ebd296010f1c0454a919c7d
1.4623264544286542e+38
37
userns: Don't allow creation if the user is chrooted Guarantee that the policy of which files may be access that is established by setting the root directory will not be violated by user namespaces by verifying that the root directory points to the root of the mount namespace at the time of user namespace creation. Changing the root is a privileged operation, and as a matter of policy it serves to limit unprivileged processes to files below the current root directory. For reasons of simplicity and comprehensibility the privilege to change the root directory is gated solely on the CAP_SYS_CHROOT capability in the user namespace. Therefore when creating a user namespace we must ensure that the policy of which files may be access can not be violated by changing the root directory. Anyone who runs a processes in a chroot and would like to use user namespace can setup the same view of filesystems with a mount namespace instead. With this result that this is not a practical limitation for using user namespaces. Cc: [email protected] Acked-by: Serge Hallyn <[email protected]> Reported-by: Andy Lutomirski <[email protected]> Signed-off-by: "Eric W. Biederman" <[email protected]>
0
int npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra) { int valid_sum_count = 0; int i, sum_in_page; for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) { if (sbi->ckpt->alloc_type[i] == SSR) valid_sum_count += sbi->blocks_per_seg; else { if (for_ra) valid_sum_count += le16_to_cpu( F2FS_CKPT(sbi)->cur_data_blkoff[i]); else valid_sum_count += curseg_blkoff(sbi, i); } } sum_in_page = (PAGE_SIZE - 2 * SUM_JOURNAL_SIZE - SUM_FOOTER_SIZE) / SUMMARY_SIZE; if (valid_sum_count <= sum_in_page) return 1; else if ((valid_sum_count - sum_in_page) <= (PAGE_SIZE - SUM_FOOTER_SIZE) / SUMMARY_SIZE) return 2; return 3; }
Safe
[ "CWE-20" ]
linux
638164a2718f337ea224b747cf5977ef143166a4
1.7370843677479686e+38
26
f2fs: fix potential panic during fstrim As Ju Hyung Park reported: "When 'fstrim' is called for manual trim, a BUG() can be triggered randomly with this patch. I'm seeing this issue on both x86 Desktop and arm64 Android phone. On x86 Desktop, this was caused during Ubuntu boot-up. I have a cronjob installed which calls 'fstrim -v /' during boot. On arm64 Android, this was caused during GC looping with 1ms gc_min_sleep_time & gc_max_sleep_time." Root cause of this issue is that f2fs_wait_discard_bios can only be used by f2fs_put_super, because during put_super there must be no other referrers, so it can ignore discard entry's reference count when removing the entry, otherwise in other caller we will hit bug_on in __remove_discard_cmd as there may be other issuer added reference count in discard entry. Thread A Thread B - issue_discard_thread - f2fs_ioc_fitrim - f2fs_trim_fs - f2fs_wait_discard_bios - __issue_discard_cmd - __submit_discard_cmd - __wait_discard_cmd - dc->ref++ - __wait_one_discard_bio - __wait_discard_cmd - __remove_discard_cmd - f2fs_bug_on(sbi, dc->ref) Fixes: 969d1b180d987c2be02de890d0fff0f66a0e80de Reported-by: Ju Hyung Park <[email protected]> Signed-off-by: Chao Yu <[email protected]> Signed-off-by: Jaegeuk Kim <[email protected]>
0
ssize_t smb_vfs_call_lgetxattr(struct vfs_handle_struct *handle, const char *path, const char *name, void *value, size_t size) { VFS_FIND(lgetxattr); return handle->fns->lgetxattr(handle, path, name, value, size); }
Safe
[ "CWE-22" ]
samba
bd269443e311d96ef495a9db47d1b95eb83bb8f4
1.763616394577444e+38
7
Fix bug 7104 - "wide links" and "unix extensions" are incompatible. Change parameter "wide links" to default to "no". Ensure "wide links = no" if "unix extensions = yes" on a share. Fix man pages to refect this. Remove "within share" checks for a UNIX symlink set - even if widelinks = no. The server will not follow that link anyway. Correct DEBUG message in check_reduced_name() to add missing "\n" so it's really clear when a path is being denied as it's outside the enclosing share path. Jeremy.
0
static void pppol2tp_next_session(struct net *net, struct pppol2tp_seq_data *pd) { pd->session = l2tp_session_get_nth(pd->tunnel, pd->session_idx, true); pd->session_idx++; if (pd->session == NULL) { pd->session_idx = 0; pppol2tp_next_tunnel(net, pd); } }
Safe
[ "CWE-416" ]
linux
f026bc29a8e093edfbb2a77700454b285c97e8ad
1.487778568062365e+38
10
l2tp: pass tunnel pointer to ->session_create() Using l2tp_tunnel_find() in pppol2tp_session_create() and l2tp_eth_create() is racy, because no reference is held on the returned session. These functions are only used to implement the ->session_create callback which is run by l2tp_nl_cmd_session_create(). Therefore searching for the parent tunnel isn't necessary because l2tp_nl_cmd_session_create() already has a pointer to it and holds a reference. This patch modifies ->session_create()'s prototype to directly pass the the parent tunnel as parameter, thus avoiding searching for it in pppol2tp_session_create() and l2tp_eth_create(). Since we have to touch the ->session_create() call in l2tp_nl_cmd_session_create(), let's also remove the useless conditional: we know that ->session_create isn't NULL at this point because it's already been checked earlier in this same function. Finally, one might be tempted to think that the removed l2tp_tunnel_find() calls were harmless because they would return the same tunnel as the one held by l2tp_nl_cmd_session_create() anyway. But that tunnel might be removed and a new one created with same tunnel Id before the l2tp_tunnel_find() call. In this case l2tp_tunnel_find() would return the new tunnel which wouldn't be protected by the reference held by l2tp_nl_cmd_session_create(). Fixes: 309795f4bec2 ("l2tp: Add netlink control API for L2TP") Fixes: d9e31d17ceba ("l2tp: Add L2TP ethernet pseudowire support") Signed-off-by: Guillaume Nault <[email protected]> Signed-off-by: David S. Miller <[email protected]>
0
static void dummy_dev_set_color(void *a, Ulong b, Ulong c) { }
Safe
[ "CWE-20" ]
evince
d4139205b010ed06310d14284e63114e88ec6de2
1.1655365913917146e+38
3
backends: Fix several security issues in the dvi-backend. See CVE-2010-2640, CVE-2010-2641, CVE-2010-2642 and CVE-2010-2643.
0
psf_binheader_readf (SF_PRIVATE *psf, char const *format, ...) { va_list argptr ; sf_count_t *countptr, countdata ; unsigned char *ucptr, sixteen_bytes [16] ; unsigned int *intptr, intdata ; unsigned short *shortptr ; char *charptr ; float *floatptr ; double *doubleptr ; char c ; int byte_count = 0, count ; if (! format) return psf_ftell (psf) ; va_start (argptr, format) ; while ((c = *format++)) { switch (c) { case 'e' : /* All conversions are now from LE to host. */ psf->rwf_endian = SF_ENDIAN_LITTLE ; break ; case 'E' : /* All conversions are now from BE to host. */ psf->rwf_endian = SF_ENDIAN_BIG ; break ; case 'm' : /* 4 byte marker value eg 'RIFF' */ intptr = va_arg (argptr, unsigned int*) ; ucptr = (unsigned char*) intptr ; byte_count += header_read (psf, ucptr, sizeof (int)) ; *intptr = GET_MARKER (ucptr) ; break ; case 'h' : intptr = va_arg (argptr, unsigned int*) ; ucptr = (unsigned char*) intptr ; byte_count += header_read (psf, sixteen_bytes, sizeof (sixteen_bytes)) ; { int k ; intdata = 0 ; for (k = 0 ; k < 16 ; k++) intdata ^= sixteen_bytes [k] << k ; } *intptr = intdata ; break ; case '1' : charptr = va_arg (argptr, char*) ; *charptr = 0 ; byte_count += header_read (psf, charptr, sizeof (char)) ; break ; case '2' : /* 2 byte value with the current endian-ness */ shortptr = va_arg (argptr, unsigned short*) ; *shortptr = 0 ; ucptr = (unsigned char*) shortptr ; byte_count += header_read (psf, ucptr, sizeof (short)) ; if (psf->rwf_endian == SF_ENDIAN_BIG) *shortptr = GET_BE_SHORT (ucptr) ; else *shortptr = GET_LE_SHORT (ucptr) ; break ; case '3' : /* 3 byte value with the current endian-ness */ intptr = va_arg (argptr, unsigned int*) ; *intptr = 0 ; byte_count += header_read (psf, sixteen_bytes, 3) ; if (psf->rwf_endian == SF_ENDIAN_BIG) *intptr = GET_BE_3BYTE (sixteen_bytes) ; else *intptr = GET_LE_3BYTE (sixteen_bytes) ; break ; case '4' : /* 4 byte value with the current endian-ness */ intptr = va_arg (argptr, unsigned int*) ; *intptr = 0 ; ucptr = (unsigned char*) intptr ; byte_count += header_read (psf, ucptr, sizeof (int)) ; if (psf->rwf_endian == SF_ENDIAN_BIG) *intptr = psf_get_be32 (ucptr, 0) ; else *intptr = psf_get_le32 (ucptr, 0) ; break ; case '8' : /* 8 byte value with the current endian-ness */ countptr = va_arg (argptr, sf_count_t *) ; *countptr = 0 ; byte_count += header_read (psf, sixteen_bytes, 8) ; if (psf->rwf_endian == SF_ENDIAN_BIG) countdata = psf_get_be64 (sixteen_bytes, 0) ; else countdata = psf_get_le64 (sixteen_bytes, 0) ; *countptr = countdata ; break ; case 'f' : /* Float conversion */ floatptr = va_arg (argptr, float *) ; *floatptr = 0.0 ; byte_count += header_read (psf, floatptr, sizeof (float)) ; if (psf->rwf_endian == SF_ENDIAN_BIG) *floatptr = float32_be_read ((unsigned char*) floatptr) ; else *floatptr = float32_le_read ((unsigned char*) floatptr) ; break ; case 'd' : /* double conversion */ doubleptr = va_arg (argptr, double *) ; *doubleptr = 0.0 ; byte_count += header_read (psf, doubleptr, sizeof (double)) ; if (psf->rwf_endian == SF_ENDIAN_BIG) *doubleptr = double64_be_read ((unsigned char*) doubleptr) ; else *doubleptr = double64_le_read ((unsigned char*) doubleptr) ; break ; case 's' : psf_log_printf (psf, "Format conversion 's' not implemented yet.\n") ; /* strptr = va_arg (argptr, char *) ; size = strlen (strptr) + 1 ; size += (size & 1) ; longdata = H2LE_32 (size) ; get_int (psf, longdata) ; memcpy (&(psf->header [psf->headindex]), strptr, size) ; psf->headindex += size ; */ break ; case 'b' : /* Raw bytes */ charptr = va_arg (argptr, 
char*) ; count = va_arg (argptr, size_t) ; if (count > 0) byte_count += header_read (psf, charptr, count) ; break ; case 'G' : charptr = va_arg (argptr, char*) ; count = va_arg (argptr, size_t) ; if (count > 0) byte_count += header_gets (psf, charptr, count) ; break ; case 'z' : psf_log_printf (psf, "Format conversion 'z' not implemented yet.\n") ; /* size = va_arg (argptr, size_t) ; while (size) { psf->header [psf->headindex] = 0 ; psf->headindex ++ ; size -- ; } ; */ break ; case 'p' : /* Get the seek position first. */ count = va_arg (argptr, size_t) ; header_seek (psf, count, SEEK_SET) ; byte_count = count ; break ; case 'j' : /* Get the seek position first. */ count = va_arg (argptr, size_t) ; if (count) { header_seek (psf, count, SEEK_CUR) ; byte_count += count ; } ; break ; default : psf_log_printf (psf, "*** Invalid format specifier `%c'\n", c) ; psf->error = SFE_INTERNAL ; break ; } ; } ; va_end (argptr) ; return byte_count ; } /* psf_binheader_readf */
Vulnerable
[ "CWE-119", "CWE-787" ]
libsndfile
708e996c87c5fae77b104ccfeb8f6db784c32074
2.7102684769411588e+38
181
src/ : Move to a variable length header buffer Previously, the `psf->header` buffer was a fixed length specified by `SF_HEADER_LEN` which was set to `12292`. This was problematic for two reasons; this value was un-necessarily large for the majority of files and too small for some others. Now the size of the header buffer starts at 256 bytes and grows as necessary up to a maximum of 100k.
1
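The libsndfile commit message above describes replacing the fixed `SF_HEADER_LEN` (12292-byte) header buffer with one that starts at 256 bytes and grows on demand up to a 100k cap. The sketch below only illustrates that grow-with-a-hard-cap pattern under those stated numbers; the `header_buf`/`header_reserve` names are invented here and are not libsndfile's actual API.

```c
#include <stdlib.h>

#define HEADER_INITIAL	256
#define HEADER_MAXIMUM	(100 * 1024)

typedef struct
{	unsigned char	*ptr ;
	size_t			len ;	/* bytes currently allocated */
} header_buf ;

/* Grow the buffer so it can hold at least `needed` bytes, doubling as it
** grows but refusing anything past the 100k cap instead of overflowing.
*/
static int
header_reserve (header_buf *h, size_t needed)
{	unsigned char	*tmp ;
	size_t			newlen ;

	if (needed <= h->len)
		return 0 ;
	if (needed > HEADER_MAXIMUM)
		return -1 ;

	newlen = h->len > 0 ? h->len : HEADER_INITIAL ;
	while (newlen < needed)
		newlen *= 2 ;
	if (newlen > HEADER_MAXIMUM)
		newlen = HEADER_MAXIMUM ;

	if ((tmp = realloc (h->ptr, newlen)) == NULL)
		return -1 ;
	h->ptr = tmp ;
	h->len = newlen ;
	return 0 ;
} /* header_reserve */
```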
static void update_notify_icon_state_order_free(NOTIFY_ICON_STATE_ORDER* notify) { free(notify->toolTip.string); free(notify->infoTip.text.string); free(notify->infoTip.title.string); update_free_window_icon_info(&notify->icon); memset(notify, 0, sizeof(NOTIFY_ICON_STATE_ORDER)); }
Safe
[ "CWE-125" ]
FreeRDP
6b2bc41935e53b0034fe5948aeeab4f32e80f30f
1.7818590813172245e+38
8
Fix #6010: Check length in read_icon_info
0
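The FreeRDP fix referenced above ("Check length in read_icon_info") is about validating a length field before reading that many bytes (CWE-125); the function shown in this row is only the matching free routine. The sketch below shows the general check-before-read pattern in plain C rather than FreeRDP's stream API; all names are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Read a 4-byte length prefix, then the payload it announces, but only
 * after confirming both fit inside the bytes that are actually left. */
static int read_counted_bytes(const uint8_t *buf, size_t buflen, size_t *pos,
                              uint8_t *out, size_t out_cap, size_t *out_len)
{
    uint32_t declared;

    if (*pos > buflen || buflen - *pos < 4)
        return -1;                          /* not enough room for the prefix */
    memcpy(&declared, buf + *pos, 4);       /* endianness handling omitted */
    *pos += 4;

    if (declared > buflen - *pos)           /* the length check the fix adds */
        return -1;
    if (declared > out_cap)
        return -1;

    memcpy(out, buf + *pos, declared);
    *pos += declared;
    *out_len = declared;
    return 0;
}
```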
xfs_bmap_local_to_extents_empty( struct xfs_inode *ip, int whichfork) { struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, whichfork); ASSERT(whichfork != XFS_COW_FORK); ASSERT(XFS_IFORK_FORMAT(ip, whichfork) == XFS_DINODE_FMT_LOCAL); ASSERT(ifp->if_bytes == 0); ASSERT(XFS_IFORK_NEXTENTS(ip, whichfork) == 0); xfs_bmap_forkoff_reset(ip, whichfork); ifp->if_flags &= ~XFS_IFINLINE; ifp->if_flags |= XFS_IFEXTENTS; ifp->if_u1.if_root = NULL; ifp->if_height = 0; XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS); }
Safe
[]
linux
2c4306f719b083d17df2963bc761777576b8ad1b
2.8869803006603536e+38
18
xfs: set format back to extents if xfs_bmap_extents_to_btree If xfs_bmap_extents_to_btree fails in a mode where we call xfs_iroot_realloc(-1) to de-allocate the root, set the format back to extents. Otherwise we can assume we can dereference ifp->if_broot based on the XFS_DINODE_FMT_BTREE format, and crash. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199423 Signed-off-by: Eric Sandeen <[email protected]> Reviewed-by: Christoph Hellwig <[email protected]> Reviewed-by: Darrick J. Wong <[email protected]> Signed-off-by: Darrick J. Wong <[email protected]>
0
static void red_sasl_auth_free(RedSASLAuth *auth) { g_free(auth->data); g_free(auth->mechname); g_free(auth->mechlist); g_free(auth); }
Safe
[]
spice
95a0cfac8a1c8eff50f05e65df945da3bb501fc9
1.5102522547188236e+38
7
With OpenSSL 1.0.2 and earlier: disable client-side renegotiation. Fixed issue #49 Fixes BZ#1904459 Signed-off-by: Julien Ropé <[email protected]> Reported-by: BlackKD Acked-by: Frediano Ziglio <[email protected]>
0
Magick::ResolutionType Magick::Image::resolutionUnits(void) const { return(static_cast<Magick::ResolutionType>(constImage()->units)); }
Safe
[ "CWE-416" ]
ImageMagick
8c35502217c1879cb8257c617007282eee3fe1cc
1.3532262178370469e+38
4
Added missing return to avoid use after free.
0
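The ImageMagick message for this row ("Added missing return to avoid use after free", CWE-416) describes a fix elsewhere in the file: an error path freed an object and then fell through to code that still used it. A minimal, hypothetical illustration of that pattern:

```c
#include <stdlib.h>

struct widget { int state; };

static int process(struct widget *w, int failed)
{
    if (failed)
    {
        free(w);
        return -1;   /* the "missing return": without it execution falls
                        through and dereferences the freed pointer below */
    }
    w->state = 1;    /* only reached while w is still valid */
    return 0;
}
```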
static void vmx_cpuid_update(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); if (cpu_has_secondary_exec_ctrls()) { vmx_compute_secondary_exec_control(vmx); vmcs_set_secondary_exec_control(vmx->secondary_exec_control); } if (nested_vmx_allowed(vcpu)) to_vmx(vcpu)->msr_ia32_feature_control_valid_bits |= FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; else to_vmx(vcpu)->msr_ia32_feature_control_valid_bits &= ~FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; if (nested_vmx_allowed(vcpu)) nested_vmx_cr_fixed1_bits_update(vcpu); }
Safe
[ "CWE-284" ]
linux
727ba748e110b4de50d142edca9d6a9b7e6111d8
3.786261932016397e+37
19
kvm: nVMX: Enforce cpl=0 for VMX instructions VMX instructions executed inside a L1 VM will always trigger a VM exit even when executed with cpl 3. This means we must perform the privilege check in software. Fixes: 70f3aac964ae("kvm: nVMX: Remove superfluous VMX instruction fault checks") Cc: [email protected] Signed-off-by: Felix Wilhelm <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
0
GF_Err ainf_Write(GF_Box *s, GF_BitStream *bs) { GF_Err e; GF_AssetInformationBox *ptr = (GF_AssetInformationBox *) s; e = gf_isom_full_box_write(s, bs); if (e) return e; gf_bs_write_u32(bs, ptr->profile_version); gf_bs_write_data(bs, ptr->APID, (u32) strlen(ptr->APID) + 1); return GF_OK; }
Safe
[ "CWE-125" ]
gpac
bceb03fd2be95097a7b409ea59914f332fb6bc86
3.2273975655192726e+38
11
fixed 2 possible heap overflows (inc. #1088)
0
static void sony_remove(struct hid_device *hdev) { struct sony_sc *sc = hid_get_drvdata(hdev); hid_hw_close(hdev); if (sc->quirks & DUALSHOCK4_CONTROLLER_BT) device_remove_file(&sc->hdev->dev, &dev_attr_bt_poll_interval); if (sc->fw_version) device_remove_file(&sc->hdev->dev, &dev_attr_firmware_version); if (sc->hw_version) device_remove_file(&sc->hdev->dev, &dev_attr_hardware_version); sony_cancel_work_sync(sc); sony_remove_dev_list(sc); sony_release_device_id(sc); hid_hw_stop(hdev); }
Safe
[ "CWE-787" ]
linux
d9d4b1e46d9543a82c23f6df03f4ad697dab361b
2.852084786929997e+38
23
HID: Fix assumption that devices have inputs The syzbot fuzzer found a slab-out-of-bounds write bug in the hid-gaff driver. The problem is caused by the driver's assumption that the device must have an input report. While this will be true for all normal HID input devices, a suitably malicious device can violate the assumption. The same assumption is present in over a dozen other HID drivers. This patch fixes them by checking that the list of hid_inputs for the hid_device is nonempty before allowing it to be used. Reported-and-tested-by: [email protected] Signed-off-by: Alan Stern <[email protected]> CC: <[email protected]> Signed-off-by: Benjamin Tissoires <[email protected]>
0
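The commit message above explains that several HID drivers assumed every device has at least one input report and dereferenced the first entry of the inputs list unconditionally. The sketch below is a kernel-style rendering of the guard it describes, assembled from the message rather than copied from the actual patch; treat the helper usage as an assumption.

```c
#include <linux/errno.h>
#include <linux/hid.h>
#include <linux/list.h>

/* Refuse to proceed when the device exposes no input reports, instead of
 * dereferencing the first (non-existent) entry of hdev->inputs. */
static int example_check_inputs(struct hid_device *hdev)
{
	struct hid_input *hidinput;

	if (list_empty(&hdev->inputs)) {
		hid_err(hdev, "no inputs found\n");
		return -ENODEV;
	}
	hidinput = list_first_entry(&hdev->inputs, struct hid_input, list);
	return hidinput->input ? 0 : -ENODEV;
}
```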
void WebPImage::decodeChunks(long filesize) { DataBuf chunkId(5); byte size_buff[WEBP_TAG_SIZE]; bool has_canvas_data = false; #ifdef EXIV2_DEBUG_MESSAGES std::cout << "Reading metadata" << std::endl; #endif chunkId.pData_[4] = '\0' ; while (!io_->eof() && io_->tell() < filesize) { readOrThrow(*io_, chunkId.pData_, WEBP_TAG_SIZE, Exiv2::kerCorruptedMetadata); readOrThrow(*io_, size_buff, WEBP_TAG_SIZE, Exiv2::kerCorruptedMetadata); const uint32_t size_u32 = Exiv2::getULong(size_buff, littleEndian); // Check that `size_u32` is safe to cast to `long`. enforce(size_u32 <= static_cast<size_t>(std::numeric_limits<unsigned int>::max()), Exiv2::kerCorruptedMetadata); const long size = static_cast<long>(size_u32); // Check that `size` is within bounds. enforce(io_->tell() <= filesize, Exiv2::kerCorruptedMetadata); enforce(size <= (filesize - io_->tell()), Exiv2::kerCorruptedMetadata); DataBuf payload(size); if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8X) && !has_canvas_data) { enforce(size >= 10, Exiv2::kerCorruptedMetadata); has_canvas_data = true; byte size_buf[WEBP_TAG_SIZE]; readOrThrow(*io_, payload.pData_, payload.size_, Exiv2::kerCorruptedMetadata); // Fetch width memcpy(&size_buf, &payload.pData_[4], 3); size_buf[3] = 0; pixelWidth_ = Exiv2::getULong(size_buf, littleEndian) + 1; // Fetch height memcpy(&size_buf, &payload.pData_[7], 3); size_buf[3] = 0; pixelHeight_ = Exiv2::getULong(size_buf, littleEndian) + 1; } else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8) && !has_canvas_data) { enforce(size >= 10, Exiv2::kerCorruptedMetadata); has_canvas_data = true; readOrThrow(*io_, payload.pData_, payload.size_, Exiv2::kerCorruptedMetadata); byte size_buf[WEBP_TAG_SIZE]; // Fetch width"" memcpy(&size_buf, &payload.pData_[6], 2); size_buf[2] = 0; size_buf[3] = 0; pixelWidth_ = Exiv2::getULong(size_buf, littleEndian) & 0x3fff; // Fetch height memcpy(&size_buf, &payload.pData_[8], 2); size_buf[2] = 0; size_buf[3] = 0; pixelHeight_ = Exiv2::getULong(size_buf, littleEndian) & 0x3fff; } else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_VP8L) && !has_canvas_data) { enforce(size >= 5, Exiv2::kerCorruptedMetadata); has_canvas_data = true; byte size_buf_w[2]; byte size_buf_h[3]; readOrThrow(*io_, payload.pData_, payload.size_, Exiv2::kerCorruptedMetadata); // Fetch width memcpy(&size_buf_w, &payload.pData_[1], 2); size_buf_w[1] &= 0x3F; pixelWidth_ = Exiv2::getUShort(size_buf_w, littleEndian) + 1; // Fetch height memcpy(&size_buf_h, &payload.pData_[2], 3); size_buf_h[0] = ((size_buf_h[0] >> 6) & 0x3) | ((size_buf_h[1] & 0x3F) << 0x2); size_buf_h[1] = ((size_buf_h[1] >> 6) & 0x3) | ((size_buf_h[2] & 0xF) << 0x2); pixelHeight_ = Exiv2::getUShort(size_buf_h, littleEndian) + 1; } else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_ANMF) && !has_canvas_data) { enforce(size >= 12, Exiv2::kerCorruptedMetadata); has_canvas_data = true; byte size_buf[WEBP_TAG_SIZE]; readOrThrow(*io_, payload.pData_, payload.size_, Exiv2::kerCorruptedMetadata); // Fetch width memcpy(&size_buf, &payload.pData_[6], 3); size_buf[3] = 0; pixelWidth_ = Exiv2::getULong(size_buf, littleEndian) + 1; // Fetch height memcpy(&size_buf, &payload.pData_[9], 3); size_buf[3] = 0; pixelHeight_ = Exiv2::getULong(size_buf, littleEndian) + 1; } else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_ICCP)) { readOrThrow(*io_, payload.pData_, payload.size_, Exiv2::kerCorruptedMetadata); this->setIccProfile(payload); } else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_EXIF)) { readOrThrow(*io_, payload.pData_, payload.size_, 
Exiv2::kerCorruptedMetadata); byte size_buff2[2]; // 4 meaningful bytes + 2 padding bytes byte exifLongHeader[] = { 0xFF, 0x01, 0xFF, 0xE1, 0x00, 0x00 }; byte exifShortHeader[] = { 0x45, 0x78, 0x69, 0x66, 0x00, 0x00 }; byte exifTiffLEHeader[] = { 0x49, 0x49, 0x2A }; // "MM*" byte exifTiffBEHeader[] = { 0x4D, 0x4D, 0x00, 0x2A }; // "II\0*" byte* rawExifData = NULL; long offset = 0; bool s_header = false; bool le_header = false; bool be_header = false; long pos = getHeaderOffset (payload.pData_, payload.size_, (byte*)&exifLongHeader, 4); if (pos == -1) { pos = getHeaderOffset (payload.pData_, payload.size_, (byte*)&exifLongHeader, 6); if (pos != -1) { s_header = true; } } if (pos == -1) { pos = getHeaderOffset (payload.pData_, payload.size_, (byte*)&exifTiffLEHeader, 3); if (pos != -1) { le_header = true; } } if (pos == -1) { pos = getHeaderOffset (payload.pData_, payload.size_, (byte*)&exifTiffBEHeader, 4); if (pos != -1) { be_header = true; } } if (s_header) { offset += 6; } if (be_header || le_header) { offset += 12; } const long sizePayload = payload.size_ + offset; rawExifData = (byte*)malloc(sizePayload); if (s_header) { us2Data(size_buff2, (uint16_t) (sizePayload - 6), bigEndian); memcpy(rawExifData, (char*)&exifLongHeader, 4); memcpy(rawExifData + 4, (char*)&size_buff2, 2); } if (be_header || le_header) { us2Data(size_buff2, (uint16_t) (sizePayload - 6), bigEndian); memcpy(rawExifData, (char*)&exifLongHeader, 4); memcpy(rawExifData + 4, (char*)&size_buff2, 2); memcpy(rawExifData + 6, (char*)&exifShortHeader, 6); } memcpy(rawExifData + offset, payload.pData_, payload.size_); #ifdef EXIV2_DEBUG_MESSAGES std::cout << "Display Hex Dump [size:" << (unsigned long)sizePayload << "]" << std::endl; std::cout << Internal::binaryToHex(rawExifData, sizePayload); #endif if (pos != -1) { XmpData xmpData; ByteOrder bo = ExifParser::decode(exifData_, payload.pData_ + pos, payload.size_ - pos); setByteOrder(bo); } else { #ifndef SUPPRESS_WARNINGS EXV_WARNING << "Failed to decode Exif metadata." << std::endl; #endif exifData_.clear(); } if (rawExifData) free(rawExifData); } else if (equalsWebPTag(chunkId, WEBP_CHUNK_HEADER_XMP)) { readOrThrow(*io_, payload.pData_, payload.size_, Exiv2::kerCorruptedMetadata); xmpPacket_.assign(reinterpret_cast<char*>(payload.pData_), payload.size_); if (xmpPacket_.size() > 0 && XmpParser::decode(xmpData_, xmpPacket_)) { #ifndef SUPPRESS_WARNINGS EXV_WARNING << "Failed to decode XMP metadata." << std::endl; #endif } else { #ifdef EXIV2_DEBUG_MESSAGES std::cout << "Display Hex Dump [size:" << (unsigned long)payload.size_ << "]" << std::endl; std::cout << Internal::binaryToHex(payload.pData_, payload.size_); #endif } } else { io_->seek(size, BasicIo::cur); } if ( io_->tell() % 2 ) io_->seek(+1, BasicIo::cur); } }
Safe
[ "CWE-703", "CWE-125" ]
exiv2
783b3a6ff15ed6f82a8f8e6c8a6f3b84a9b04d4b
1.967705682385632e+38
203
Improve bound checking in WebPImage::doWriteMetadata()
0
int Field_float::store(const char *from,size_t len,CHARSET_INFO *cs) { int error; Field_float::store(get_double(from, len, cs, &error)); return error; }
Safe
[ "CWE-416", "CWE-703" ]
server
08c7ab404f69d9c4ca6ca7a9cf7eec74c804f917
2.5846060292419074e+37
6
MDEV-24176 Server crashes after insert in the table with virtual column generated using date_format() and if() vcol_info->expr is allocated on expr_arena at parsing stage. Since expr item is allocated on expr_arena all its containee items must be allocated on expr_arena too. Otherwise fix_session_expr() will encounter prematurely freed item. When table is reopened from cache vcol_info contains stale expression. We refresh expression via TABLE::vcol_fix_exprs() but first we must prepare a proper context (Vcol_expr_context) which meets some requirements: 1. As noted above expr update must be done on expr_arena as there may be new items created. It was a bug in fix_session_expr_for_read() and was just not reproduced because of no second refix. Now refix is done for more cases so it does reproduce. Tests affected: vcol.binlog 2. Also name resolution context must be narrowed to the single table. Tested by: vcol.update main.default vcol.vcol_syntax gcol.gcol_bugfixes 3. sql_mode must be clean and not fail expr update. sql_mode such as MODE_NO_BACKSLASH_ESCAPES, MODE_NO_ZERO_IN_DATE, etc must not affect vcol expression update. If the table was created successfully any further evaluation must not fail. Tests affected: main.func_like Reviewed by: Sergei Golubchik <[email protected]>
0
inbound_privmsg (server *serv, char *from, char *ip, char *text, int id, const message_tags_data *tags_data) { session *sess; struct User *user; char idtext[64]; gboolean nodiag = FALSE; sess = find_dialog (serv, from); if (sess || prefs.hex_gui_autoopen_dialog) { /*0=ctcp 1=priv will set hex_gui_autoopen_dialog=0 here is flud detected */ if (!sess) { if (flood_check (from, ip, serv, current_sess, 1)) /* Create a dialog session */ sess = inbound_open_dialog (serv, from, tags_data); else sess = serv->server_session; if (!sess) return; /* ?? */ } if (ip && ip[0]) set_topic (sess, ip, ip); inbound_chanmsg (serv, NULL, NULL, from, text, FALSE, id, tags_data); return; } sess = find_session_from_nick (from, serv); if (!sess) { sess = serv->front_session; nodiag = TRUE; /* We don't want it to look like a normal message in front sess */ } user = userlist_find (sess, from); if (user) { user->lasttalk = time (0); if (user->account) id = TRUE; } inbound_make_idtext (serv, idtext, sizeof (idtext), id); if (sess->type == SESS_DIALOG && !nodiag) EMIT_SIGNAL_TIMESTAMP (XP_TE_DPRIVMSG, sess, from, text, idtext, NULL, 0, tags_data->timestamp); else EMIT_SIGNAL_TIMESTAMP (XP_TE_PRIVMSG, sess, from, text, idtext, NULL, 0, tags_data->timestamp); }
Safe
[ "CWE-22" ]
hexchat
4e061a43b3453a9856d34250c3913175c45afe9d
1.4677246132688375e+38
54
Clean up handling CAP LS
0
ci_nregs(mrb_callinfo *ci) { struct RProc *p = ci->proc; int n = 0; if (!p) { if (ci->argc < 0) return 3; return ci->argc+2; } if (!MRB_PROC_CFUNC_P(p) && p->body.irep) { n = p->body.irep->nregs; } if (ci->argc < 0) { if (n < 3) n = 3; /* self + args + blk */ } if (ci->argc > n) { n = ci->argc + 2; /* self + blk */ } return n; }
Safe
[ "CWE-415" ]
mruby
97319697c8f9f6ff27b32589947e1918e3015503
1.2860526780074818e+38
20
Cancel 9cdf439 Should not free the pointer in `realloc` since it can cause use-after-free problem.
0
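The mruby message above reverts a change that freed the old pointer inside a realloc wrapper, which sets up use-after-free/double-free in callers that still hold it. A small sketch of the conventional pattern the revert restores, with invented names:

```c
#include <stdlib.h>

/* Keep the old pointer until realloc succeeds. On failure the original
 * block is still valid and still owned by the caller, so freeing it here
 * as well would make every caller that retains `old` touch freed memory. */
static void *grow_buffer(void *old, size_t new_size)
{
    void *tmp = realloc(old, new_size);

    if (tmp == NULL)
        return NULL;    /* caller decides how and when to clean up `old` */
    return tmp;
}
```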
smtp_report_tx_data(struct smtp_session *s, uint32_t msgid, int ok) { if (! SESSION_FILTERED(s)) return; report_smtp_tx_data("smtp-in", s->id, msgid, ok); }
Safe
[ "CWE-78", "CWE-252" ]
src
9dcfda045474d8903224d175907bfc29761dcb45
5.20002600624362e+37
7
Fix a security vulnerability discovered by Qualys which can lead to a privileges escalation on mbox deliveries and unprivileged code execution on lmtp deliveries, due to a logic issue causing a sanity check to be missed. ok eric@, millert@
0
NCURSES_SP_NAME(_nc_mvcur) (NCURSES_SP_DCLx int yold, int xold, int ynew, int xnew) { int rc; rc = _nc_real_mvcur(NCURSES_SP_ARGx yold, xold, ynew, xnew, NCURSES_SP_NAME(_nc_outch), TRUE); /* * With the terminal-driver, we cannot distinguish between internal and * external calls. Flush the output if the screen has not been * initialized, e.g., when used from low-level terminfo programs. */ if ((SP_PARM != 0) && (SP_PARM->_endwin == ewInitial)) NCURSES_SP_NAME(_nc_flush) (NCURSES_SP_ARG); return rc; }
Safe
[]
ncurses
790a85dbd4a81d5f5d8dd02a44d84f01512ef443
2.636191352919312e+38
17
ncurses 6.2 - patch 20200531 + correct configure version-check/warnng for g++ to allow for 10.x + re-enable "bel" in konsole-base (report by Nia Huang) + add linux-s entry (patch by Alexandre Montaron). + drop long-obsolete convert_configure.pl + add test/test_parm.c, for checking tparm changes. + improve parameter-checking for tparm, adding function _nc_tiparm() to handle the most-used case, which accepts only numeric parameters (report/testcase by "puppet-meteor"). + use a more conservative estimate of the buffer-size in lib_tparm.c's save_text() and save_number(), in case the sprintf() function passes-through unexpected characters from a format specifier (report/testcase by "puppet-meteor"). + add a check for end-of-string in cvtchar to handle a malformed string in infotocap (report/testcase by "puppet-meteor").
0
ar6000_sysfs_bmi_write(struct file *fp, struct kobject *kobj, struct bin_attribute *bin_attr, char *buf, loff_t pos, size_t count) { int index; struct ar6_softc *ar; struct hif_device_os_device_info *osDevInfo; AR_DEBUG_PRINTF(ATH_DEBUG_INFO,("BMI: Write %d bytes\n", (u32)count)); for (index=0; index < MAX_AR6000; index++) { ar = (struct ar6_softc *)ar6k_priv(ar6000_devices[index]); osDevInfo = &ar->osDevInfo; if (kobj == (&(((struct device *)osDevInfo->pOSDevice)->kobj))) { break; } } if (index == MAX_AR6000) return 0; if ((BMIRawWrite(ar->arHifDevice, (u8*)buf, count)) != 0) { return 0; } return count; }
Safe
[ "CWE-703", "CWE-264" ]
linux
550fd08c2cebad61c548def135f67aba284c6162
2.057774046767054e+38
25
net: Audit drivers to identify those needing IFF_TX_SKB_SHARING cleared After the last patch, We are left in a state in which only drivers calling ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real hardware call ether_setup for their net_devices and don't hold any state in their skbs. There are a handful of drivers that violate this assumption of course, and need to be fixed up. This patch identifies those drivers, and marks them as not being able to support the safe transmission of skbs by clearning the IFF_TX_SKB_SHARING flag in priv_flags Signed-off-by: Neil Horman <[email protected]> CC: Karsten Keil <[email protected]> CC: "David S. Miller" <[email protected]> CC: Jay Vosburgh <[email protected]> CC: Andy Gospodarek <[email protected]> CC: Patrick McHardy <[email protected]> CC: Krzysztof Halasa <[email protected]> CC: "John W. Linville" <[email protected]> CC: Greg Kroah-Hartman <[email protected]> CC: Marcel Holtmann <[email protected]> CC: Johannes Berg <[email protected]> Signed-off-by: David S. Miller <[email protected]>
0
quantify_node(Node **np, int lower, int upper) { Node* tmp = node_new_quantifier(lower, upper, 0); if (IS_NULL(tmp)) return ONIGERR_MEMORY; NQTFR(tmp)->target = *np; *np = tmp; return 0; }
Safe
[ "CWE-476" ]
Onigmo
00cc7e28a3ed54b3b512ef3b58ea737a57acf1f9
1.1496780014603868e+38
8
Fix SEGV in onig_error_code_to_str() (Fix #132) When onig_new(ONIG_SYNTAX_PERL) fails with ONIGERR_INVALID_GROUP_NAME, onig_error_code_to_str() crashes. onig_scan_env_set_error_string() should have been used when returning ONIGERR_INVALID_GROUP_NAME.
0
static MZ_FORCEINLINE mz_bool mz_zip_reader_string_equal(const char *pA, const char *pB, mz_uint len, mz_uint flags) { mz_uint i; if (flags & MZ_ZIP_FLAG_CASE_SENSITIVE) return 0 == memcmp(pA, pB, len); for (i = 0; i < len; ++i) if (MZ_TOLOWER(pA[i]) != MZ_TOLOWER(pB[i])) return MZ_FALSE; return MZ_TRUE; }
Safe
[ "CWE-20", "CWE-190" ]
tinyexr
a685e3332f61cd4e59324bf3f669d36973d64270
3.6780860150188966e+37
10
Make line_no with too large value(2**20) invalid. Fixes #124
0
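The tinyexr message for this row rejects a `line_no` of 2**20 or more as invalid (CWE-190/CWE-20); the row's function itself is an unrelated string-comparison helper from the bundled miniz. A hedged sketch of the up-front range check the message describes, with the limit taken from the message and the names invented:

```c
#include <stdint.h>

#define EXAMPLE_MAX_LINE_NO (1 << 20)   /* 2**20, the limit quoted above */

/* Validate the value before it feeds later size arithmetic such as
 * line_no * bytes_per_line, which could otherwise overflow. */
static int line_no_is_valid(int64_t line_no)
{
    return line_no >= 0 && line_no < EXAMPLE_MAX_LINE_NO;
}
```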
stringprep_ucs4_nfkc_normalize (const uint32_t * str, ssize_t len) { char *p; uint32_t *result_wc; p = stringprep_ucs4_to_utf8 (str, len, 0, 0); result_wc = _g_utf8_normalize_wc (p, -1, G_NORMALIZE_NFKC); free (p); return result_wc; }
Safe
[]
libidn
2e97c2796581c27213962c77f5a8571a598f9a2e
2.1096583596746e+38
11
libidn: stringprep_utf8_to_ucs4 now rejects invalid UTF-8. CVE-2015-2059
0
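The libidn message above (CVE-2015-2059) says the UTF-8 to UCS-4 path now rejects invalid UTF-8 instead of converting it blindly. Since the surrounding code already leans on GLib-style helpers, the sketch below validates with GLib before converting; whether this mirrors libidn's actual fix is an assumption.

```c
#include <glib.h>
#include <stdint.h>

/* Validate first, convert second: malformed UTF-8 is rejected rather than
 * being passed into the UCS-4 conversion. */
static uint32_t *
utf8_to_ucs4_checked (const char *str, ssize_t len)
{
  if (!g_utf8_validate (str, len, NULL))
    return NULL;
  return (uint32_t *) g_utf8_to_ucs4_fast (str, len, NULL);
}
```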
static int nfs_readdir_page_filler(struct nfs_readdir_descriptor *desc, struct nfs_entry *entry, struct page **xdr_pages, unsigned int buflen, struct page **arrays, size_t narrays) { struct address_space *mapping = desc->file->f_mapping; struct xdr_stream stream; struct xdr_buf buf; struct page *scratch, *new, *page = *arrays; int status; scratch = alloc_page(GFP_KERNEL); if (scratch == NULL) return -ENOMEM; xdr_init_decode_pages(&stream, &buf, xdr_pages, buflen); xdr_set_scratch_page(&stream, scratch); do { if (entry->fattr->label) entry->fattr->label->len = NFS4_MAXLABELLEN; status = xdr_decode(desc, entry, &stream); if (status != 0) break; if (desc->plus) nfs_prime_dcache(file_dentry(desc->file), entry, desc->dir_verifier); status = nfs_readdir_add_to_array(entry, page); if (status != -ENOSPC) continue; if (page->mapping != mapping) { if (!--narrays) break; new = nfs_readdir_page_array_alloc(entry->prev_cookie, GFP_KERNEL); if (!new) break; arrays++; *arrays = page = new; } else { new = nfs_readdir_page_get_next(mapping, page->index + 1, entry->prev_cookie); if (!new) break; if (page != *arrays) nfs_readdir_page_unlock_and_put(page); page = new; } status = nfs_readdir_add_to_array(entry, page); } while (!status && !entry->eof); switch (status) { case -EBADCOOKIE: if (entry->eof) { nfs_readdir_page_set_eof(page); status = 0; } break; case -ENOSPC: case -EAGAIN: status = 0; break; } if (page != *arrays) nfs_readdir_page_unlock_and_put(page); put_page(scratch); return status; }
Safe
[ "CWE-909" ]
linux
ac795161c93699d600db16c1a8cc23a65a1eceaf
1.6529441547589349e+38
77
NFSv4: Handle case where the lookup of a directory fails If the application sets the O_DIRECTORY flag, and tries to open a regular file, nfs_atomic_open() will punt to doing a regular lookup. If the server then returns a regular file, we will happily return a file descriptor with uninitialised open state. The fix is to return the expected ENOTDIR error in these cases. Reported-by: Lyu Tao <[email protected]> Fixes: 0dd2b474d0b6 ("nfs: implement i_op->atomic_open()") Signed-off-by: Trond Myklebust <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
0
mp3_type_find (GstTypeFind * tf, gpointer unused) { GstTypeFindProbability prob, mid_prob; const guint8 *data; guint layer, mid_layer; guint64 length; mp3_type_find_at_offset (tf, 0, &layer, &prob); length = gst_type_find_get_length (tf); if (length == 0 || length == (guint64) - 1) { if (prob != 0) goto suggest; return; } /* if we're pretty certain already, skip the additional check */ if (prob >= GST_TYPE_FIND_LIKELY) goto suggest; mp3_type_find_at_offset (tf, length / 2, &mid_layer, &mid_prob); if (mid_prob > 0) { if (prob == 0) { GST_LOG ("detected audio/mpeg only in the middle (p=%u)", mid_prob); layer = mid_layer; prob = mid_prob; goto suggest; } if (layer != mid_layer) { GST_WARNING ("audio/mpeg layer discrepancy: %u vs. %u", layer, mid_layer); return; /* FIXME: or should we just go with the one in the middle? */ } /* detected mpeg audio both in middle of the file and at the start */ prob = (prob + mid_prob) / 2; goto suggest; } /* a valid header right at the start makes it more likely * that this is actually plain mpeg-1 audio */ if (prob > 0) { data = gst_type_find_peek (tf, 0, 4); /* use min. frame size? */ if (data && mp3_type_frame_length_from_header (GST_READ_UINT32_BE (data), &layer, NULL, NULL, NULL, NULL, 0) != 0) { prob = MIN (prob + 10, GST_TYPE_FIND_MAXIMUM); } } if (prob > 0) goto suggest; return; suggest: { g_return_if_fail (layer >= 1 && layer <= 3); gst_type_find_suggest_simple (tf, prob, "audio/mpeg", "mpegversion", G_TYPE_INT, 1, "layer", G_TYPE_INT, layer, "parsed", G_TYPE_BOOLEAN, FALSE, NULL); } }
Safe
[ "CWE-125" ]
gst-plugins-base
2fdccfd64fc609e44e9c4b8eed5bfdc0ab9c9095
1.714991544394752e+38
64
typefind: bounds check windows ico detection Fixes out of bounds read https://bugzilla.gnome.org/show_bug.cgi?id=774902
0
void st_select_lex_node::fast_exclude() { if (link_prev) { if ((*link_prev= link_next)) link_next->link_prev= link_prev; } // Remove slave structure for (; slave; slave= slave->next) slave->fast_exclude(); }
Safe
[ "CWE-476" ]
server
3a52569499e2f0c4d1f25db1e81617a9d9755400
2.816764787981623e+38
12
MDEV-25636: Bug report: abortion in sql/sql_parse.cc:6294 The asserion failure was caused by this query select /*id=1*/ from t1 where col= ( select /*id=2*/ from ... where corr_cond1 union select /*id=4*/ from ... where corr_cond2) Here, - select with id=2 was correlated due to corr_cond1. - select with id=4 was initially correlated due to corr_cond2, but then the optimizer optimized away the correlation, making the select with id=4 uncorrelated. However, since select with id=2 remained correlated, the execution had to re-compute the whole UNION. When it tried to execute select with id=4, it hit an assertion (join buffer already free'd). This is because select with id=4 has freed its execution structures after it has been executed once. The select is uncorrelated, so it did not expect it would need to be executed for the second time. Fixed this by adding this logic in st_select_lex::optimize_unflattened_subqueries(): If a member of a UNION is correlated, mark all its members as correlated, so that they are prepared to be executed multiple times.
0
Client::haveParsedReplyHeaders() { Must(theFinalReply); maybePurgeOthers(); // adaptation may overwrite old offset computed using the virgin response currentOffset = 0; if (const auto cr = theFinalReply->contentRange()) { if (cr->spec.offset != HttpHdrRangeSpec::UnknownPosition) currentOffset = cr->spec.offset; } }
Safe
[ "CWE-20" ]
squid
6c9c44d0e9cf7b72bb233360c5308aa063af3d69
2.0939240092522142e+38
12
Handle more partial responses (#791)
0
gnome_desktop_thumbnail_factory_has_valid_failed_thumbnail (GnomeDesktopThumbnailFactory *factory, const char *uri, time_t mtime) { char *path; g_return_val_if_fail (uri != NULL, FALSE); path = lookup_failed_thumbnail_path (uri, mtime, factory->priv->size); if (path == NULL) return FALSE; g_free (path); return TRUE; }
Safe
[]
nautilus
2ddba428ef2b13d0620bd599c3635b9c11044659
3.3510991791356203e+38
16
Update gnome-desktop code Closes https://gitlab.gnome.org/GNOME/nautilus/issues/987
0
void Curl_disconnect(struct Curl_easy *data, struct connectdata *conn, bool dead_connection) { /* there must be a connection to close */ DEBUGASSERT(conn); /* it must be removed from the connection cache */ DEBUGASSERT(!conn->bundle); /* there must be an associated transfer */ DEBUGASSERT(data); /* the transfer must be detached from the connection */ DEBUGASSERT(!data->conn); /* * If this connection isn't marked to force-close, leave it open if there * are other users of it */ if(CONN_INUSE(conn) && !dead_connection) { DEBUGF(infof(data, "Curl_disconnect when inuse: %zu", CONN_INUSE(conn))); return; } if(conn->dns_entry) { Curl_resolv_unlock(data, conn->dns_entry); conn->dns_entry = NULL; } /* Cleanup NTLM connection-related data */ Curl_http_auth_cleanup_ntlm(conn); /* Cleanup NEGOTIATE connection-related data */ Curl_http_auth_cleanup_negotiate(conn); if(conn->bits.connect_only) /* treat the connection as dead in CONNECT_ONLY situations */ dead_connection = TRUE; /* temporarily attach the connection to this transfer handle for the disconnect and shutdown */ Curl_attach_connnection(data, conn); if(conn->handler->disconnect) /* This is set if protocol-specific cleanups should be made */ conn->handler->disconnect(data, conn, dead_connection); conn_shutdown(data, conn); /* detach it again */ Curl_detach_connnection(data); conn_free(conn); }
Safe
[]
curl
852aa5ad351ea53e5f01d2f44b5b4370c2bf5425
1.4751710480146073e+37
54
url: check sasl additional parameters for connection reuse. Also move static function safecmp() as non-static Curl_safecmp() since its purpose is needed at several places. Bug: https://curl.se/docs/CVE-2022-22576.html CVE-2022-22576 Closes #8746
0
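The curl message above (CVE-2022-22576) adds checks of SASL-related parameters before reusing a connection and promotes a `safecmp()` helper to `Curl_safecmp()`. The sketch below shows the NULL-safe equality idea such a reuse check needs; the exact body of curl's helper is an assumption here.

```c
#include <stdbool.h>
#include <string.h>

/* NULL-safe string equality for connection-reuse decisions: two unset
 * (NULL) parameters match each other, but an unset parameter never
 * matches a set one. */
static bool safe_streq(const char *a, const char *b)
{
    if (a && b)
        return strcmp(a, b) == 0;
    return !a && !b;
}
```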
GlobalObjectProperty *LReference::castAsGlobalObjectProperty() const { return kind_ == Kind::VarOrGlobal ? dyn_cast_or_null<GlobalObjectProperty>(base_) : nullptr; }
Safe
[ "CWE-125", "CWE-787" ]
hermes
091835377369c8fd5917d9b87acffa721ad2a168
1.594908877321184e+38
5
Correctly restore whether or not a function is an inner generator Summary: If a generator was large enough to be lazily compiled, we would lose that information when reconstituting the function's context. This meant the function was generated as a regular function instead of a generator. #utd-hermes-ignore-android Reviewed By: tmikov Differential Revision: D23580247 fbshipit-source-id: af5628bf322cbdc7c7cdfbb5f8d0756328518ea1
0
L16toY(LogLuvState* sp, uint8* op, tmsize_t n) { int16* l16 = (int16*) sp->tbuf; float* yp = (float*) op; while (n-- > 0) *yp++ = (float)LogL16toY(*l16++); }
Safe
[ "CWE-787" ]
libtiff
aaab5c3c9d2a2c6984f23ccbc79702610439bc65
2.1717090445723136e+38
8
* libtiff/tif_luv.c: fix potential out-of-bound writes in decode functions in non debug builds by replacing assert()s by regular if checks (bugzilla #2522). Fix potential out-of-bound reads in case of short input data.
0
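The libtiff message above replaces assert() bounds checks with regular if checks, because asserts vanish in NDEBUG builds and then no longer stop the out-of-bounds writes (CWE-787). A generic sketch of that swap, not taken from tif_luv.c:

```c
#include <stddef.h>

/* A runtime check stays active in release builds, unlike
 *   assert(src_count <= dst_count);
 * which is compiled out under NDEBUG. */
static int decode_into(float *dst, size_t dst_count,
                       const short *src, size_t src_count)
{
    size_t i;

    if (src_count > dst_count)
        return 0;               /* report a decode error instead of writing OOB */
    for (i = 0; i < src_count; i++)
        dst[i] = (float) src[i];
    return 1;
}
```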
p11_rpc_message_write_ulong_buffer (p11_rpc_message *msg, CK_ULONG count) { assert (msg != NULL); assert (msg->output != NULL); /* Make sure this is in the right order */ assert (!msg->signature || p11_rpc_message_verify_part (msg, "fu")); p11_rpc_buffer_add_uint32 (msg->output, count); return !p11_buffer_failed (msg->output); }
Safe
[ "CWE-787" ]
p11-kit
2617f3ef888e103324a28811886b99ed0a56346d
2.631995804439139e+38
11
Check attribute length against buffer size If an attribute's length does not match the length of the byte array inside it, one length was used for allocation, and the other was used for memcpy. This additional check will instead return an error on malformed messages.
0
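The p11-kit message above describes a mismatch where the attribute header's length sized the allocation while the inner byte array's length sized the memcpy. A hedged sketch of the added consistency check, with hypothetical names:

```c
#include <stdint.h>
#include <string.h>

/* Reject the message when the two length fields disagree, so the same
 * value sizes both the allocation and the copy. */
static int copy_attribute_value(uint8_t *dst, size_t dst_len,
                                const uint8_t *value, size_t value_len,
                                uint32_t declared_len)
{
    if (declared_len != value_len || value_len > dst_len)
        return 0;               /* malformed message */
    memcpy(dst, value, value_len);
    return 1;
}
```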
execute_arith_command (arith_command) ARITH_COM *arith_command; { int expok, save_line_number, retval; intmax_t expresult; WORD_LIST *new; char *exp; expresult = 0; save_line_number = line_number; this_command_name = "(("; /* )) */ line_number = arith_command->line; /* If we're in a function, update the line number information. */ if (variable_context && interactive_shell) line_number -= function_line_number; command_string_index = 0; print_arith_command (arith_command->exp); if (signal_in_progress (DEBUG_TRAP) == 0) { FREE (the_printed_command_except_trap); the_printed_command_except_trap = savestring (the_printed_command); } /* Run the debug trap before each arithmetic command, but do it after we update the line number information and before we expand the various words in the expression. */ retval = run_debug_trap (); #if defined (DEBUGGER) /* In debugging mode, if the DEBUG trap returns a non-zero status, we skip the command. */ if (debugging_mode && retval != EXECUTION_SUCCESS) { line_number = save_line_number; return (EXECUTION_SUCCESS); } #endif new = expand_words_no_vars (arith_command->exp); /* If we're tracing, make a new word list with `((' at the front and `))' at the back and print it. */ if (echo_command_at_execute) xtrace_print_arith_cmd (new); if (new) { exp = new->next ? string_list (new) : new->word->word; expresult = evalexp (exp, &expok); line_number = save_line_number; if (exp != new->word->word) free (exp); dispose_words (new); } else { expresult = 0; expok = 1; } if (expok == 0) return (EXECUTION_FAILURE); return (expresult == 0 ? EXECUTION_FAILURE : EXECUTION_SUCCESS); }
Vulnerable
[]
bash
955543877583837c85470f7fb8a97b7aa8d45e6c
1.0133399323310547e+38
67
bash-4.4-rc2 release
1
static int get_master_version_and_clock(MYSQL* mysql, Master_info* mi) { char err_buff[MAX_SLAVE_ERRMSG]; const char* errmsg= 0; int err_code= 0; int version_number=0; version_number= atoi(mysql->server_version); MYSQL_RES *master_res= 0; MYSQL_ROW master_row; DBUG_ENTER("get_master_version_and_clock"); /* Free old mi_description_event (that is needed if we are in a reconnection). */ DBUG_EXECUTE_IF("unrecognized_master_version", { version_number= 1; };); mysql_mutex_lock(&mi->data_lock); mi->set_mi_description_event(NULL); if (!my_isdigit(&my_charset_bin,*mysql->server_version)) { errmsg = "Master reported unrecognized MySQL version"; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, ER(err_code), errmsg); } else { /* Note the following switch will bug when we have MySQL branch 30 ;) */ switch (version_number) { case 0: case 1: case 2: errmsg = "Master reported unrecognized MySQL version"; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, ER(err_code), errmsg); break; case 3: mi->set_mi_description_event(new Format_description_log_event(1, mysql->server_version)); break; case 4: mi->set_mi_description_event(new Format_description_log_event(3, mysql->server_version)); break; default: /* Master is MySQL >=5.0. Give a default Format_desc event, so that we can take the early steps (like tests for "is this a 3.23 master") which we have to take before we receive the real master's Format_desc which will override this one. Note that the Format_desc we create below is garbage (it has the format of the *slave*); it's only good to help know if the master is 3.23, 4.0, etc. */ mi->set_mi_description_event(new Format_description_log_event(4, mysql->server_version)); break; } } /* This does not mean that a 5.0 slave will be able to read a 5.5 master; but as we don't know yet, we don't want to forbid this for now. If a 5.0 slave can't read a 5.5 master, this will show up when the slave can't read some events sent by the master, and there will be error messages. */ if (errmsg) { /* unlock the mutex on master info structure */ mysql_mutex_unlock(&mi->data_lock); goto err; } /* as we are here, we tried to allocate the event */ if (mi->get_mi_description_event() == NULL) { mysql_mutex_unlock(&mi->data_lock); errmsg= "default Format_description_log_event"; err_code= ER_SLAVE_CREATE_EVENT_FAILURE; sprintf(err_buff, ER(err_code), errmsg); goto err; } /* FD_q's (A) is set initially from RL's (A): FD_q.(A) := RL.(A). It's necessary to adjust FD_q.(A) at this point because in the following course FD_q is going to be dumped to RL. Generally FD_q is derived from a received FD_m (roughly FD_q := FD_m) in queue_event and the master's (A) is installed. At one step with the assignment the Relay-Log's checksum alg is set to a new value: RL.(A) := FD_q.(A). If the slave service is stopped the last time assigned RL.(A) will be passed over to the restarting service (to the current execution point). RL.A is a "codec" to verify checksum in queue_event() almost all the time the first fake Rotate event. Starting from this point IO thread will executes the following checksum warmup sequence of actions: FD_q.A := RL.A, A_m^0 := master.@@global.binlog_checksum, {queue_event(R_f): verifies(R_f, A_m^0)}, {queue_event(FD_m): verifies(FD_m, FD_m.A), dump(FD_q), rotate(RL), FD_q := FD_m, RL.A := FD_q.A)} See legends definition on MYSQL_BIN_LOG::relay_log_checksum_alg docs lines (binlog.h). In above A_m^0 - the value of master's @@binlog_checksum determined in the upcoming handshake (stored in mi->checksum_alg_before_fd). 
After the warm-up sequence IO gets to "normal" checksum verification mode to use RL.A in {queue_event(E_m): verifies(E_m, RL.A)} until it has received a new FD_m. */ mi->get_mi_description_event()->checksum_alg= mi->rli->relay_log.relay_log_checksum_alg; DBUG_ASSERT(mi->get_mi_description_event()->checksum_alg != BINLOG_CHECKSUM_ALG_UNDEF); DBUG_ASSERT(mi->rli->relay_log.relay_log_checksum_alg != BINLOG_CHECKSUM_ALG_UNDEF); mysql_mutex_unlock(&mi->data_lock); /* Compare the master and slave's clock. Do not die if master's clock is unavailable (very old master not supporting UNIX_TIMESTAMP()?). */ DBUG_EXECUTE_IF("dbug.before_get_UNIX_TIMESTAMP", { const char act[]= "now " "wait_for signal.get_unix_timestamp"; DBUG_ASSERT(opt_debug_sync_timeout > 0); DBUG_ASSERT(!debug_sync_set_action(current_thd, STRING_WITH_LEN(act))); };); master_res= NULL; if (!mysql_real_query(mysql, STRING_WITH_LEN("SELECT UNIX_TIMESTAMP()")) && (master_res= mysql_store_result(mysql)) && (master_row= mysql_fetch_row(master_res))) { mysql_mutex_lock(&mi->data_lock); mi->clock_diff_with_master= (long) (time((time_t*) 0) - strtoul(master_row[0], 0, 10)); mysql_mutex_unlock(&mi->data_lock); } else if (check_io_slave_killed(mi->info_thd, mi, NULL)) goto slave_killed_err; else if (is_network_error(mysql_errno(mysql))) { mi->report(WARNING_LEVEL, mysql_errno(mysql), "Get master clock failed with error: %s", mysql_error(mysql)); goto network_err; } else { mysql_mutex_lock(&mi->data_lock); mi->clock_diff_with_master= 0; /* The "most sensible" value */ mysql_mutex_unlock(&mi->data_lock); sql_print_warning("\"SELECT UNIX_TIMESTAMP()\" failed on master, " "do not trust column Seconds_Behind_Master of SHOW " "SLAVE STATUS. Error: %s (%d)", mysql_error(mysql), mysql_errno(mysql)); } if (master_res) { mysql_free_result(master_res); master_res= NULL; } /* Check that the master's server id and ours are different. Because if they are equal (which can result from a simple copy of master's datadir to slave, thus copying some my.cnf), replication will work but all events will be skipped. Do not die if SHOW VARIABLES LIKE 'SERVER_ID' fails on master (very old master?). Note: we could have put a @@SERVER_ID in the previous SELECT UNIX_TIMESTAMP() instead, but this would not have worked on 3.23 masters. 
*/ DBUG_EXECUTE_IF("dbug.before_get_SERVER_ID", { const char act[]= "now " "wait_for signal.get_server_id"; DBUG_ASSERT(opt_debug_sync_timeout > 0); DBUG_ASSERT(!debug_sync_set_action(current_thd, STRING_WITH_LEN(act))); };); master_res= NULL; master_row= NULL; if (!mysql_real_query(mysql, STRING_WITH_LEN("SHOW VARIABLES LIKE 'SERVER_ID'")) && (master_res= mysql_store_result(mysql)) && (master_row= mysql_fetch_row(master_res))) { if ((::server_id == (mi->master_id= strtoul(master_row[1], 0, 10))) && !mi->rli->replicate_same_server_id) { errmsg= "The slave I/O thread stops because master and slave have equal \ MySQL server ids; these ids must be different for replication to work (or \ the --replicate-same-server-id option must be used on slave but this does \ not always make sense; please check the manual before using it)."; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, ER(err_code), errmsg); goto err; } } else if (mysql_errno(mysql)) { if (check_io_slave_killed(mi->info_thd, mi, NULL)) goto slave_killed_err; else if (is_network_error(mysql_errno(mysql))) { mi->report(WARNING_LEVEL, mysql_errno(mysql), "Get master SERVER_ID failed with error: %s", mysql_error(mysql)); goto network_err; } /* Fatal error */ errmsg= "The slave I/O thread stops because a fatal error is encountered \ when it try to get the value of SERVER_ID variable from master."; err_code= mysql_errno(mysql); sprintf(err_buff, "%s Error: %s", errmsg, mysql_error(mysql)); goto err; } else if (!master_row && master_res) { mi->report(WARNING_LEVEL, ER_UNKNOWN_SYSTEM_VARIABLE, "Unknown system variable 'SERVER_ID' on master, \ maybe it is a *VERY OLD MASTER*."); } if (master_res) { mysql_free_result(master_res); master_res= NULL; } if (mi->master_id == 0 && mi->ignore_server_ids->dynamic_ids.elements > 0) { errmsg= "Slave configured with server id filtering could not detect the master server id."; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, ER(err_code), errmsg); goto err; } /* Check that the master's global character_set_server and ours are the same. Not fatal if query fails (old master?). Note that we don't check for equality of global character_set_client and collation_connection (neither do we prevent their setting in set_var.cc). That's because from what I (Guilhem) have tested, the global values of these 2 are never used (new connections don't use them). We don't test equality of global collation_database either as it's is going to be deprecated (made read-only) in 4.1 very soon. The test is only relevant if master < 5.0.3 (we'll test only if it's older than the 5 branch; < 5.0.3 was alpha...), as >= 5.0.3 master stores charset info in each binlog event. We don't do it for 3.23 because masters <3.23.50 hang on SELECT @@unknown_var (BUG#7965 - see changelog of 3.23.50). So finally we test only if master is 4.x. */ /* redundant with rest of code but safer against later additions */ if (*mysql->server_version == '3') goto err; if (*mysql->server_version == '4') { master_res= NULL; if (!mysql_real_query(mysql, STRING_WITH_LEN("SELECT @@GLOBAL.COLLATION_SERVER")) && (master_res= mysql_store_result(mysql)) && (master_row= mysql_fetch_row(master_res))) { if (strcmp(master_row[0], global_system_variables.collation_server->name)) { errmsg= "The slave I/O thread stops because master and slave have \ different values for the COLLATION_SERVER global variable. 
The values must \ be equal for the Statement-format replication to work"; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, ER(err_code), errmsg); goto err; } } else if (check_io_slave_killed(mi->info_thd, mi, NULL)) goto slave_killed_err; else if (is_network_error(mysql_errno(mysql))) { mi->report(WARNING_LEVEL, mysql_errno(mysql), "Get master COLLATION_SERVER failed with error: %s", mysql_error(mysql)); goto network_err; } else if (mysql_errno(mysql) != ER_UNKNOWN_SYSTEM_VARIABLE) { /* Fatal error */ errmsg= "The slave I/O thread stops because a fatal error is encountered \ when it try to get the value of COLLATION_SERVER global variable from master."; err_code= mysql_errno(mysql); sprintf(err_buff, "%s Error: %s", errmsg, mysql_error(mysql)); goto err; } else mi->report(WARNING_LEVEL, ER_UNKNOWN_SYSTEM_VARIABLE, "Unknown system variable 'COLLATION_SERVER' on master, \ maybe it is a *VERY OLD MASTER*. *NOTE*: slave may experience \ inconsistency if replicated data deals with collation."); if (master_res) { mysql_free_result(master_res); master_res= NULL; } } /* Perform analogous check for time zone. Theoretically we also should perform check here to verify that SYSTEM time zones are the same on slave and master, but we can't rely on value of @@system_time_zone variable (it is time zone abbreviation) since it determined at start time and so could differ for slave and master even if they are really in the same system time zone. So we are omiting this check and just relying on documentation. Also according to Monty there are many users who are using replication between servers in various time zones. Hence such check will broke everything for them. (And now everything will work for them because by default both their master and slave will have 'SYSTEM' time zone). This check is only necessary for 4.x masters (and < 5.0.4 masters but those were alpha). */ if (*mysql->server_version == '4') { master_res= NULL; if (!mysql_real_query(mysql, STRING_WITH_LEN("SELECT @@GLOBAL.TIME_ZONE")) && (master_res= mysql_store_result(mysql)) && (master_row= mysql_fetch_row(master_res))) { if (strcmp(master_row[0], global_system_variables.time_zone->get_name()->ptr())) { errmsg= "The slave I/O thread stops because master and slave have \ different values for the TIME_ZONE global variable. The values must \ be equal for the Statement-format replication to work"; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, ER(err_code), errmsg); goto err; } } else if (check_io_slave_killed(mi->info_thd, mi, NULL)) goto slave_killed_err; else if (is_network_error(mysql_errno(mysql))) { mi->report(WARNING_LEVEL, mysql_errno(mysql), "Get master TIME_ZONE failed with error: %s", mysql_error(mysql)); goto network_err; } else { /* Fatal error */ errmsg= "The slave I/O thread stops because a fatal error is encountered \ when it try to get the value of TIME_ZONE global variable from master."; err_code= mysql_errno(mysql); sprintf(err_buff, "%s Error: %s", errmsg, mysql_error(mysql)); goto err; } if (master_res) { mysql_free_result(master_res); master_res= NULL; } } if (mi->heartbeat_period != 0.0) { char llbuf[22]; const char query_format[]= "SET @master_heartbeat_period= %s"; char query[sizeof(query_format) - 2 + sizeof(llbuf)]; /* the period is an ulonglong of nano-secs. 
*/ llstr((ulonglong) (mi->heartbeat_period*1000000000UL), llbuf); sprintf(query, query_format, llbuf); if (mysql_real_query(mysql, query, strlen(query))) { if (check_io_slave_killed(mi->info_thd, mi, NULL)) goto slave_killed_err; if (is_network_error(mysql_errno(mysql))) { mi->report(WARNING_LEVEL, mysql_errno(mysql), "SET @master_heartbeat_period to master failed with error: %s", mysql_error(mysql)); mysql_free_result(mysql_store_result(mysql)); goto network_err; } else { /* Fatal error */ errmsg= "The slave I/O thread stops because a fatal error is encountered " " when it tries to SET @master_heartbeat_period on master."; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, "%s Error: %s", errmsg, mysql_error(mysql)); mysql_free_result(mysql_store_result(mysql)); goto err; } } mysql_free_result(mysql_store_result(mysql)); } /* Querying if master is capable to checksum and notifying it about own CRC-awareness. The master's side instant value of @@global.binlog_checksum is stored in the dump thread's uservar area as well as cached locally to become known in consensus by master and slave. */ if (DBUG_EVALUATE_IF("simulate_slave_unaware_checksum", 0, 1)) { int rc; const char query[]= "SET @master_binlog_checksum= @@global.binlog_checksum"; master_res= NULL; mi->checksum_alg_before_fd= BINLOG_CHECKSUM_ALG_UNDEF; //initially undefined /* @c checksum_alg_before_fd is queried from master in this block. If master is old checksum-unaware the value stays undefined. Once the first FD will be received its alg descriptor will replace the being queried one. */ rc= mysql_real_query(mysql, query, strlen(query)); if (rc != 0) { mi->checksum_alg_before_fd= BINLOG_CHECKSUM_ALG_OFF; if (check_io_slave_killed(mi->info_thd, mi, NULL)) goto slave_killed_err; if (mysql_errno(mysql) == ER_UNKNOWN_SYSTEM_VARIABLE) { // this is tolerable as OM -> NS is supported mi->report(WARNING_LEVEL, mysql_errno(mysql), "Notifying master by %s failed with " "error: %s", query, mysql_error(mysql)); } else { if (is_network_error(mysql_errno(mysql))) { mi->report(WARNING_LEVEL, mysql_errno(mysql), "Notifying master by %s failed with " "error: %s", query, mysql_error(mysql)); mysql_free_result(mysql_store_result(mysql)); goto network_err; } else { errmsg= "The slave I/O thread stops because a fatal error is encountered " "when it tried to SET @master_binlog_checksum on master."; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, "%s Error: %s", errmsg, mysql_error(mysql)); mysql_free_result(mysql_store_result(mysql)); goto err; } } } else { mysql_free_result(mysql_store_result(mysql)); if (!mysql_real_query(mysql, STRING_WITH_LEN("SELECT @master_binlog_checksum")) && (master_res= mysql_store_result(mysql)) && (master_row= mysql_fetch_row(master_res)) && (master_row[0] != NULL)) { mi->checksum_alg_before_fd= (uint8) find_type(master_row[0], &binlog_checksum_typelib, 1) - 1; DBUG_EXECUTE_IF("undefined_algorithm_on_slave", mi->checksum_alg_before_fd = BINLOG_CHECKSUM_ALG_UNDEF;); if(mi->checksum_alg_before_fd == BINLOG_CHECKSUM_ALG_UNDEF) { errmsg= "The slave I/O thread was stopped because a fatal error is encountered " "The checksum algorithm used by master is unknown to slave."; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, "%s Error: %s", errmsg, mysql_error(mysql)); mysql_free_result(mysql_store_result(mysql)); goto err; } // valid outcome is either of DBUG_ASSERT(mi->checksum_alg_before_fd == BINLOG_CHECKSUM_ALG_OFF || mi->checksum_alg_before_fd == BINLOG_CHECKSUM_ALG_CRC32); } else if (check_io_slave_killed(mi->info_thd, mi, 
NULL)) goto slave_killed_err; else if (is_network_error(mysql_errno(mysql))) { mi->report(WARNING_LEVEL, mysql_errno(mysql), "Get master BINLOG_CHECKSUM failed with error: %s", mysql_error(mysql)); goto network_err; } else { errmsg= "The slave I/O thread stops because a fatal error is encountered " "when it tried to SELECT @master_binlog_checksum."; err_code= ER_SLAVE_FATAL_ERROR; sprintf(err_buff, "%s Error: %s", errmsg, mysql_error(mysql)); mysql_free_result(mysql_store_result(mysql)); goto err; } } if (master_res) { mysql_free_result(master_res); master_res= NULL; } } else mi->checksum_alg_before_fd= BINLOG_CHECKSUM_ALG_OFF; if (DBUG_EVALUATE_IF("simulate_slave_unaware_gtid", 0, 1)) { switch (io_thread_init_command(mi, "SELECT @@GLOBAL.GTID_MODE", ER_UNKNOWN_SYSTEM_VARIABLE, &master_res, &master_row)) { case COMMAND_STATUS_ERROR: DBUG_RETURN(2); case COMMAND_STATUS_ALLOWED_ERROR: // master is old and does not have @@GLOBAL.GTID_MODE mi->master_gtid_mode= 0; break; case COMMAND_STATUS_OK: int typelib_index= find_type(master_row[0], &gtid_mode_typelib, 1); mysql_free_result(master_res); if (typelib_index == 0) { mi->report(ERROR_LEVEL, ER_SLAVE_FATAL_ERROR, "The slave IO thread stops because the master has " "an unknown @@GLOBAL.GTID_MODE."); DBUG_RETURN(1); } mi->master_gtid_mode= typelib_index - 1; break; } if (mi->master_gtid_mode > gtid_mode + 1 || gtid_mode > mi->master_gtid_mode + 1) { mi->report(ERROR_LEVEL, ER_SLAVE_FATAL_ERROR, "The slave IO thread stops because the master has " "@@GLOBAL.GTID_MODE %s and this server has " "@@GLOBAL.GTID_MODE %s", gtid_mode_names[mi->master_gtid_mode], gtid_mode_names[gtid_mode]); DBUG_RETURN(1); } if (mi->is_auto_position() && mi->master_gtid_mode != 3) { mi->report(ERROR_LEVEL, ER_SLAVE_FATAL_ERROR, "The slave IO thread stops because the master has " "@@GLOBAL.GTID_MODE %s and we are trying to connect " "using MASTER_AUTO_POSITION.", gtid_mode_names[mi->master_gtid_mode]); DBUG_RETURN(1); } } err: if (errmsg) { if (master_res) mysql_free_result(master_res); DBUG_ASSERT(err_code != 0); mi->report(ERROR_LEVEL, err_code, "%s", err_buff); DBUG_RETURN(1); } DBUG_RETURN(0); network_err: if (master_res) mysql_free_result(master_res); DBUG_RETURN(2); slave_killed_err: if (master_res) mysql_free_result(master_res); DBUG_RETURN(2); }
Safe
[ "CWE-284", "CWE-295" ]
mysql-server
3bd5589e1a5a93f9c224badf983cd65c45215390
6.309063655656856e+37
599
WL#6791 : Redefine client --ssl option to imply enforced encryption # Changed the meaning of the --ssl=1 option of all client binaries to mean force ssl, not try ssl and fail over to eunecrypted # Added a new MYSQL_OPT_SSL_ENFORCE mysql_options() option to specify that an ssl connection is required. # Added a new macro SSL_SET_OPTIONS() to the client SSL handling headers that sets all the relevant SSL options at once. # Revamped all of the current native clients to use the new macro # Removed some Windows line endings. # Added proper handling of the new option into the ssl helper headers. # If SSL is mandatory assume that the media is secure enough for the sha256 plugin to do unencrypted password exchange even before establishing a connection. # Set the default ssl cipher to DHE-RSA-AES256-SHA if none is specified. # updated test cases that require a non-default cipher to spawn a mysql command line tool binary since mysqltest has no support for specifying ciphers. # updated the replication slave connection code to always enforce SSL if any of the SSL config options is present. # test cases added and updated. # added a mysql_get_option() API to return mysql_options() values. Used the new API inside the sha256 plugin. # Fixed compilation warnings because of unused variables. # Fixed test failures (mysql_ssl and bug13115401) # Fixed whitespace issues. # Fully implemented the mysql_get_option() function. # Added a test case for mysql_get_option() # fixed some trailing whitespace issues # fixed some uint/int warnings in mysql_client_test.c # removed shared memory option from non-windows get_options tests # moved MYSQL_OPT_LOCAL_INFILE to the uint options
0
static int tun_chr_open(struct inode *inode, struct file * file) { struct net *net = current->nsproxy->net_ns; struct tun_file *tfile; DBG1(KERN_INFO, "tunX: tun_chr_open\n"); tfile = (struct tun_file *)sk_alloc(net, AF_UNSPEC, GFP_KERNEL, &tun_proto, 0); if (!tfile) return -ENOMEM; RCU_INIT_POINTER(tfile->tun, NULL); tfile->flags = 0; tfile->ifindex = 0; init_waitqueue_head(&tfile->wq.wait); RCU_INIT_POINTER(tfile->socket.wq, &tfile->wq); tfile->socket.file = file; tfile->socket.ops = &tun_socket_ops; sock_init_data(&tfile->socket, &tfile->sk); tfile->sk.sk_write_space = tun_sock_write_space; tfile->sk.sk_sndbuf = INT_MAX; file->private_data = tfile; INIT_LIST_HEAD(&tfile->next); sock_set_flag(&tfile->sk, SOCK_ZEROCOPY); return 0; }
Safe
[ "CWE-476" ]
linux
0ad646c81b2182f7fa67ec0c8c825e0ee165696d
1.2137287052912141e+38
33
tun: call dev_get_valid_name() before register_netdevice() register_netdevice() could fail early when we have an invalid dev name, in which case ->ndo_uninit() is not called. For tun device, this is a problem because a timer etc. are already initialized and it expects ->ndo_uninit() to clean them up. We could move these initializations into a ->ndo_init() so that register_netdevice() knows better, however this is still complicated due to the logic in tun_detach(). Therefore, I choose to just call dev_get_valid_name() before register_netdevice(), which is quicker and much easier to audit. And for this specific case, it is already enough. Fixes: 96442e42429e ("tuntap: choose the txq based on rxq") Reported-by: Dmitry Alexeev <[email protected]> Cc: Jason Wang <[email protected]> Cc: "Michael S. Tsirkin" <[email protected]> Signed-off-by: Cong Wang <[email protected]> Signed-off-by: David S. Miller <[email protected]>
0
pthread_handler_t handle_connections_namedpipes(void *arg) { HANDLE hConnectedPipe; OVERLAPPED connectOverlapped= {0}; THD *thd; my_thread_init(); DBUG_ENTER("handle_connections_namedpipes"); connectOverlapped.hEvent= CreateEvent(NULL, TRUE, FALSE, NULL); if (!connectOverlapped.hEvent) { sql_print_error("Can't create event, last error=%u", GetLastError()); unireg_abort(1); } DBUG_PRINT("general",("Waiting for named pipe connections.")); while (!abort_loop) { /* wait for named pipe connection */ BOOL fConnected= ConnectNamedPipe(hPipe, &connectOverlapped); if (!fConnected && (GetLastError() == ERROR_IO_PENDING)) { /* ERROR_IO_PENDING says async IO has started but not yet finished. GetOverlappedResult will wait for completion. */ DWORD bytes; fConnected= GetOverlappedResult(hPipe, &connectOverlapped,&bytes, TRUE); } if (abort_loop) break; if (!fConnected) fConnected = GetLastError() == ERROR_PIPE_CONNECTED; if (!fConnected) { CloseHandle(hPipe); if ((hPipe= CreateNamedPipe(pipe_name, PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED, PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT, PIPE_UNLIMITED_INSTANCES, (int) global_system_variables. net_buffer_length, (int) global_system_variables. net_buffer_length, NMPWAIT_USE_DEFAULT_WAIT, &saPipeSecurity)) == INVALID_HANDLE_VALUE) { sql_perror("Can't create new named pipe!"); break; // Abort } } hConnectedPipe = hPipe; /* create new pipe for new connection */ if ((hPipe = CreateNamedPipe(pipe_name, PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED, PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT, PIPE_UNLIMITED_INSTANCES, (int) global_system_variables.net_buffer_length, (int) global_system_variables.net_buffer_length, NMPWAIT_USE_DEFAULT_WAIT, &saPipeSecurity)) == INVALID_HANDLE_VALUE) { sql_perror("Can't create new named pipe!"); hPipe=hConnectedPipe; continue; // We have to try again } if (!(thd = new THD)) { DisconnectNamedPipe(hConnectedPipe); CloseHandle(hConnectedPipe); continue; } if (!(thd->net.vio= vio_new_win32pipe(hConnectedPipe)) || my_net_init(&thd->net, thd->net.vio)) { close_connection(thd, ER_OUT_OF_RESOURCES); delete thd; continue; } /* Host is unknown */ thd->security_ctx->set_host(my_strdup(my_localhost, MYF(0))); create_new_thread(thd); } CloseHandle(connectOverlapped.hEvent); DBUG_LEAVE; decrement_handler_count(); return 0; }
Safe
[ "CWE-264" ]
mysql-server
48bd8b16fe382be302c6f0b45931be5aa6f29a0e
2.1378428825270236e+38
95
Bug#24388753: PRIVILEGE ESCALATION USING MYSQLD_SAFE [This is the 5.5/5.6 version of the bugfix]. The problem was that it was possible to write log files ending in .ini/.cnf that later could be parsed as an options file. This made it possible for users to specify startup options without the permissions to do so. This patch fixes the problem by disallowing general query log and slow query log to be written to files ending in .ini and .cnf.
0
static void draw_fill_color_rgb( wmfAPI* API, const wmfRGB* rgb ) { PixelWand *fill_color; fill_color=NewPixelWand(); PixelSetRedQuantum(fill_color,ScaleCharToQuantum(rgb->r)); PixelSetGreenQuantum(fill_color,ScaleCharToQuantum(rgb->g)); PixelSetBlueQuantum(fill_color,ScaleCharToQuantum(rgb->b)); PixelSetAlphaQuantum(fill_color,OpaqueAlpha); DrawSetFillColor(WmfDrawingWand,fill_color); fill_color=DestroyPixelWand(fill_color); }
Safe
[ "CWE-772" ]
ImageMagick
b2b48d50300a9fbcd0aa0d9230fd6d7a08f7671e
1.4016152749953265e+38
13
https://github.com/ImageMagick/ImageMagick/issues/544
0
void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma, unsigned long addr, unsigned long end, struct zap_details *details) { pgd_t *pgd; unsigned long next; BUG_ON(addr >= end); tlb_start_vma(tlb, vma); pgd = pgd_offset(vma->vm_mm, addr); do { next = pgd_addr_end(addr, end); if (pgd_none_or_clear_bad(pgd)) continue; next = zap_p4d_range(tlb, vma, pgd, addr, next, details); } while (pgd++, addr = next, addr != end); tlb_end_vma(tlb, vma); }
Safe
[ "CWE-119" ]
linux
1be7107fbe18eed3e319a6c3e83c78254b693acb
2.328621296488346e+38
19
mm: larger stack guard gap, between vmas Stack guard page is a useful feature to reduce a risk of stack smashing into a different mapping. We have been using a single page gap which is sufficient to prevent having stack adjacent to a different mapping. But this seems to be insufficient in the light of the stack usage in userspace. E.g. glibc uses as large as 64kB alloca() in many commonly used functions. Others use constructs liks gid_t buffer[NGROUPS_MAX] which is 256kB or stack strings with MAX_ARG_STRLEN. This will become especially dangerous for suid binaries and the default no limit for the stack size limit because those applications can be tricked to consume a large portion of the stack and a single glibc call could jump over the guard page. These attacks are not theoretical, unfortunatelly. Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size because systems with larger base pages might cap stack allocations in the PAGE_SIZE units) which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix because the problem is somehow inherent, but it should reduce attack space a lot. One could argue that the gap size should be configurable from userspace, but that can be done later when somebody finds that the new 1MB is wrong for some special case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units). Implementation wise, first delete all the old code for stack guard page: because although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace - case in point, a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode. Instead of keeping gap inside the stack vma, maintain the stack guard gap as a gap between vmas: using vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap - mainly arch_get_unmapped_area(), and and the vma tree's subtree_gap support for that. Original-patch-by: Oleg Nesterov <[email protected]> Original-patch-by: Michal Hocko <[email protected]> Signed-off-by: Hugh Dickins <[email protected]> Acked-by: Michal Hocko <[email protected]> Tested-by: Helge Deller <[email protected]> # parisc Signed-off-by: Linus Torvalds <[email protected]>
0
static int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb, union futex_key *key, struct futex_pi_state **ps, struct task_struct *task, int set_waiters) { int lock_taken, ret, ownerdied = 0; u32 uval, newval, curval, vpid = task_pid_vnr(task); retry: ret = lock_taken = 0; /* * To avoid races, we attempt to take the lock here again * (by doing a 0 -> TID atomic cmpxchg), while holding all * the locks. It will most likely not succeed. */ newval = vpid; if (set_waiters) newval |= FUTEX_WAITERS; if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, 0, newval))) return -EFAULT; /* * Detect deadlocks. */ if ((unlikely((curval & FUTEX_TID_MASK) == vpid))) return -EDEADLK; /* * Surprise - we got the lock. Just return to userspace: */ if (unlikely(!curval)) return 1; uval = curval; /* * Set the FUTEX_WAITERS flag, so the owner will know it has someone * to wake at the next unlock. */ newval = curval | FUTEX_WAITERS; /* * There are two cases, where a futex might have no owner (the * owner TID is 0): OWNER_DIED. We take over the futex in this * case. We also do an unconditional take over, when the owner * of the futex died. * * This is safe as we are protected by the hash bucket lock ! */ if (unlikely(ownerdied || !(curval & FUTEX_TID_MASK))) { /* Keep the OWNER_DIED bit */ newval = (curval & ~FUTEX_TID_MASK) | vpid; ownerdied = 0; lock_taken = 1; } if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))) return -EFAULT; if (unlikely(curval != uval)) goto retry; /* * We took the lock due to owner died take over. */ if (unlikely(lock_taken)) return 1; /* * We dont have the lock. Look up the PI state (or create it if * we are the first waiter): */ ret = lookup_pi_state(uval, hb, key, ps); if (unlikely(ret)) { switch (ret) { case -ESRCH: /* * No owner found for this futex. Check if the * OWNER_DIED bit is set to figure out whether * this is a robust futex or not. */ if (get_futex_value_locked(&curval, uaddr)) return -EFAULT; /* * We simply start over in case of a robust * futex. The code above will take the futex * and return happy. */ if (curval & FUTEX_OWNER_DIED) { ownerdied = 1; goto retry; } default: break; } } return ret; }
Safe
[ "CWE-20" ]
linux
6f7b0a2a5c0fb03be7c25bd1745baa50582348ef
2.260460484582552e+38
102
futex: Forbid uaddr == uaddr2 in futex_wait_requeue_pi() If uaddr == uaddr2, then we have broken the rule of only requeueing from a non-pi futex to a pi futex with this call. If we attempt this, as the trinity test suite manages to do, we miss early wakeups as q.key is equal to key2 (because they are the same uaddr). We will then attempt to dereference the pi_mutex (which would exist had the futex_q been properly requeued to a pi futex) and trigger a NULL pointer dereference. Signed-off-by: Darren Hart <[email protected]> Cc: Dave Jones <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/ad82bfe7f7d130247fbe2b5b4275654807774227.1342809673.git.dvhart@linux.intel.com Signed-off-by: Thomas Gleixner <[email protected]>
0
static uint32_t cirrus_vga_mem_readw(void *opaque, target_phys_addr_t addr) { uint32_t v; #ifdef TARGET_WORDS_BIGENDIAN v = cirrus_vga_mem_readb(opaque, addr) << 8; v |= cirrus_vga_mem_readb(opaque, addr + 1); #else v = cirrus_vga_mem_readb(opaque, addr); v |= cirrus_vga_mem_readb(opaque, addr + 1) << 8; #endif return v; }
Safe
[ "CWE-787" ]
qemu
b2eb849d4b1fdb6f35d5c46958c7f703cf64cfef
7.717098928037634e+37
12
CVE-2007-1320 - Cirrus LGD-54XX "bitblt" heap overflow I have just noticed that patch for CVE-2007-1320 has never been applied to the QEMU CVS. Please find it below. | Multiple heap-based buffer overflows in the cirrus_invalidate_region | function in the Cirrus VGA extension in QEMU 0.8.2, as used in Xen and | possibly other products, might allow local users to execute arbitrary | code via unspecified vectors related to "attempting to mark | non-existent regions as dirty," aka the "bitblt" heap overflow. git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@4340 c046a42c-6fe2-441c-8c8c-71466251a162
0
resp_status (const struct response *resp, char **message) { int status; const char *p, *end; if (!resp->headers) { /* For a HTTP/0.9 response, assume status 200. */ if (message) *message = xstrdup (_("No headers, assuming HTTP/0.9")); return 200; } p = resp->headers[0]; end = resp->headers[1]; if (!end) return -1; /* "HTTP" */ if (end - p < 4 || 0 != strncmp (p, "HTTP", 4)) return -1; p += 4; /* Match the HTTP version. This is optional because Gnutella servers have been reported to not specify HTTP version. */ if (p < end && *p == '/') { ++p; while (p < end && c_isdigit (*p)) ++p; if (p < end && *p == '.') ++p; while (p < end && c_isdigit (*p)) ++p; } while (p < end && c_isspace (*p)) ++p; if (end - p < 3 || !c_isdigit (p[0]) || !c_isdigit (p[1]) || !c_isdigit (p[2])) return -1; status = 100 * (p[0] - '0') + 10 * (p[1] - '0') + (p[2] - '0'); p += 3; if (message) { while (p < end && c_isspace (*p)) ++p; while (p < end && c_isspace (end[-1])) --end; *message = strdupdelim (p, end); } return status; }
Safe
[ "CWE-20" ]
wget
3e25a9817f47fbb8660cc6a3b2f3eea239526c6c
2.710060213506235e+37
56
Introduce --trust-server-names. Close CVE-2010-2252.
0
static int send_request(struct avdtp *session, gboolean priority, struct avdtp_stream *stream, uint8_t signal_id, void *buffer, size_t size) { struct pending_req *req; if (stream && stream->abort_int && signal_id != AVDTP_ABORT) { DBG("Unable to send requests while aborting"); return -EINVAL; } req = g_new0(struct pending_req, 1); req->signal_id = signal_id; req->data = g_malloc(size); memcpy(req->data, buffer, size); req->data_size = size; req->stream = stream; return send_req(session, priority, req); }
Safe
[ "CWE-703" ]
bluez
7a80d2096f1b7125085e21448112aa02f49f5e9a
3.3073204115560258e+38
20
avdtp: Fix accepting invalid/malformed capabilities Check if capabilities are valid before attempting to copy them.
0
int sqlite3VdbeIdxRowid(sqlite3 *db, BtCursor *pCur, i64 *rowid){ i64 nCellKey = 0; int rc; u32 szHdr; /* Size of the header */ u32 typeRowid; /* Serial type of the rowid */ u32 lenRowid; /* Size of the rowid */ Mem m, v; /* Get the size of the index entry. Only indices entries of less ** than 2GiB are support - anything large must be database corruption. ** Any corruption is detected in sqlite3BtreeParseCellPtr(), though, so ** this code can safely assume that nCellKey is 32-bits */ assert( sqlite3BtreeCursorIsValid(pCur) ); nCellKey = sqlite3BtreePayloadSize(pCur); assert( (nCellKey & SQLITE_MAX_U32)==(u64)nCellKey ); /* Read in the complete content of the index entry */ sqlite3VdbeMemInit(&m, db, 0); rc = sqlite3VdbeMemFromBtree(pCur, 0, (u32)nCellKey, &m); if( rc ){ return rc; } /* The index entry must begin with a header size */ (void)getVarint32((u8*)m.z, szHdr); testcase( szHdr==3 ); testcase( szHdr==m.n ); testcase( szHdr>0x7fffffff ); assert( m.n>=0 ); if( unlikely(szHdr<3 || szHdr>(unsigned)m.n) ){ goto idx_rowid_corruption; } /* The last field of the index should be an integer - the ROWID. ** Verify that the last entry really is an integer. */ (void)getVarint32((u8*)&m.z[szHdr-1], typeRowid); testcase( typeRowid==1 ); testcase( typeRowid==2 ); testcase( typeRowid==3 ); testcase( typeRowid==4 ); testcase( typeRowid==5 ); testcase( typeRowid==6 ); testcase( typeRowid==8 ); testcase( typeRowid==9 ); if( unlikely(typeRowid<1 || typeRowid>9 || typeRowid==7) ){ goto idx_rowid_corruption; } lenRowid = sqlite3SmallTypeSizes[typeRowid]; testcase( (u32)m.n==szHdr+lenRowid ); if( unlikely((u32)m.n<szHdr+lenRowid) ){ goto idx_rowid_corruption; } /* Fetch the integer off the end of the index record */ sqlite3VdbeSerialGet((u8*)&m.z[m.n-lenRowid], typeRowid, &v); *rowid = v.u.i; sqlite3VdbeMemRelease(&m); return SQLITE_OK; /* Jump here if database corruption is detected after m has been ** allocated. Free the m object and return SQLITE_CORRUPT. */ idx_rowid_corruption: testcase( m.szMalloc!=0 ); sqlite3VdbeMemRelease(&m); return SQLITE_CORRUPT_BKPT; }
Safe
[ "CWE-755" ]
sqlite
8654186b0236d556aa85528c2573ee0b6ab71be3
7.543958645760078e+37
67
When an error occurs while rewriting the parser tree for window functions in the sqlite3WindowRewrite() routine, make sure that pParse->nErr is set, and make sure that this shuts down any subsequent code generation that might depend on the transformations that were implemented. This fixes a problem discovered by the Yongheng and Rui fuzzer. FossilOrigin-Name: e2bddcd4c55ba3cbe0130332679ff4b048630d0ced9a8899982edb5a3569ba7f
0
static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg, size_t len) { struct sock *sk = sock->sk; DECLARE_SOCKADDR(struct sockaddr_pkt *, saddr, msg->msg_name); struct sk_buff *skb = NULL; struct net_device *dev; struct sockcm_cookie sockc; __be16 proto = 0; int err; int extra_len = 0; /* * Get and verify the address. */ if (saddr) { if (msg->msg_namelen < sizeof(struct sockaddr)) return -EINVAL; if (msg->msg_namelen == sizeof(struct sockaddr_pkt)) proto = saddr->spkt_protocol; } else return -ENOTCONN; /* SOCK_PACKET must be sent giving an address */ /* * Find the device first to size check it */ saddr->spkt_device[sizeof(saddr->spkt_device) - 1] = 0; retry: rcu_read_lock(); dev = dev_get_by_name_rcu(sock_net(sk), saddr->spkt_device); err = -ENODEV; if (dev == NULL) goto out_unlock; err = -ENETDOWN; if (!(dev->flags & IFF_UP)) goto out_unlock; /* * You may not queue a frame bigger than the mtu. This is the lowest level * raw protocol and you must do your own fragmentation at this level. */ if (unlikely(sock_flag(sk, SOCK_NOFCS))) { if (!netif_supports_nofcs(dev)) { err = -EPROTONOSUPPORT; goto out_unlock; } extra_len = 4; /* We're doing our own CRC */ } err = -EMSGSIZE; if (len > dev->mtu + dev->hard_header_len + VLAN_HLEN + extra_len) goto out_unlock; if (!skb) { size_t reserved = LL_RESERVED_SPACE(dev); int tlen = dev->needed_tailroom; unsigned int hhlen = dev->header_ops ? dev->hard_header_len : 0; rcu_read_unlock(); skb = sock_wmalloc(sk, len + reserved + tlen, 0, GFP_KERNEL); if (skb == NULL) return -ENOBUFS; /* FIXME: Save some space for broken drivers that write a hard * header at transmission time by themselves. PPP is the notable * one here. This should really be fixed at the driver level. */ skb_reserve(skb, reserved); skb_reset_network_header(skb); /* Try to align data part correctly */ if (hhlen) { skb->data -= hhlen; skb->tail -= hhlen; if (len < hhlen) skb_reset_network_header(skb); } err = memcpy_from_msg(skb_put(skb, len), msg, len); if (err) goto out_free; goto retry; } if (!dev_validate_header(dev, skb->data, len)) { err = -EINVAL; goto out_unlock; } if (len > (dev->mtu + dev->hard_header_len + extra_len) && !packet_extra_vlan_len_allowed(dev, skb)) { err = -EMSGSIZE; goto out_unlock; } sockcm_init(&sockc, sk); if (msg->msg_controllen) { err = sock_cmsg_send(sk, msg, &sockc); if (unlikely(err)) goto out_unlock; } skb->protocol = proto; skb->dev = dev; skb->priority = sk->sk_priority; skb->mark = sk->sk_mark; skb->tstamp = sockc.transmit_time; skb_setup_tx_timestamp(skb, sockc.tsflags); if (unlikely(extra_len == 4)) skb->no_fcs = 1; packet_parse_headers(skb, sock); dev_queue_xmit(skb); rcu_read_unlock(); return len; out_unlock: rcu_read_unlock(); out_free: kfree_skb(skb); return err; }
Safe
[ "CWE-787" ]
linux
acf69c946233259ab4d64f8869d4037a198c7f06
8.4788405253634045e+37
126
net/packet: fix overflow in tpacket_rcv Using tp_reserve to calculate netoff can overflow as tp_reserve is unsigned int and netoff is unsigned short. This may lead to macoff receving a smaller value then sizeof(struct virtio_net_hdr), and if po->has_vnet_hdr is set, an out-of-bounds write will occur when calling virtio_net_hdr_from_skb. The bug is fixed by converting netoff to unsigned int and checking if it exceeds USHRT_MAX. This addresses CVE-2020-14386 Fixes: 8913336a7e8d ("packet: add PACKET_RESERVE sockopt") Signed-off-by: Or Cohen <[email protected]> Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
0
static apr_status_t modsecurity_request_body_store_memory(modsec_rec *msr, const char *data, apr_size_t length, char **error_msg) { *error_msg = NULL; /* Would storing this chunk mean going over the limit? */ if ((msr->msc_reqbody_spilltodisk) && (msr->msc_reqbody_length + length > (apr_size_t)msr->txcfg->reqbody_inmemory_limit)) { msc_data_chunk **chunks; unsigned int disklen = 0; int i; msr_log(msr, 4, "Input filter: Request too large to store in memory, switching to disk."); /* NOTE Must use modsecurity_request_body_store_disk() here * to prevent data to be sent to the streaming * processors again. */ /* Initialise disk storage */ msr->msc_reqbody_storage = MSC_REQBODY_DISK; if (modsecurity_request_body_start_init(msr, error_msg) < 0) return -1; /* Write the data we keep in memory */ chunks = (msc_data_chunk **)msr->msc_reqbody_chunks->elts; for(i = 0; i < msr->msc_reqbody_chunks->nelts; i++) { disklen += chunks[i]->length; if (modsecurity_request_body_store_disk(msr, chunks[i]->data, chunks[i]->length, error_msg) < 0) { return -1; } free(chunks[i]->data); chunks[i]->data = NULL; } /* Clear the memory pool as we no longer need the bits. */ /* IMP1 But since we only used apr_pool_clear memory might * not be released back to the OS straight away? */ msr->msc_reqbody_chunks = NULL; apr_pool_clear(msr->msc_reqbody_mp); msr_log(msr, 4, "Input filter: Wrote %u bytes from memory to disk.", disklen); /* Continue with disk storage from now on */ return modsecurity_request_body_store_disk(msr, data, length, error_msg); } /* If we're here that means we are not over the * request body in-memory limit yet. */ { unsigned long int bucket_offset, bucket_left; bucket_offset = 0; bucket_left = length; /* Although we store the request body in chunks we don't * want to use the same chunk sizes as the incoming memory * buffers. They are often of very small sizes and that * would make us waste a lot of memory. That's why we * use our own chunks of CHUNK_CAPACITY sizes. */ /* Loop until we empty this bucket into our chunks. */ while(bucket_left > 0) { /* Allocate a new chunk if we have to. */ if (msr->msc_reqbody_chunk_current == NULL) { msr->msc_reqbody_chunk_current = (msc_data_chunk *) apr_pcalloc(msr->msc_reqbody_mp, sizeof(msc_data_chunk)); if (msr->msc_reqbody_chunk_current == NULL) { *error_msg = apr_psprintf(msr->mp, "Input filter: Failed to allocate %lu bytes " "for request body chunk.", (unsigned long)sizeof(msc_data_chunk)); return -1; } msr->msc_reqbody_chunk_current->data = malloc(CHUNK_CAPACITY); if (msr->msc_reqbody_chunk_current->data == NULL) { *error_msg = apr_psprintf(msr->mp, "Input filter: Failed to allocate %d bytes " "for request body chunk data.", CHUNK_CAPACITY); return -1; } msr->msc_reqbody_chunk_current->length = 0; msr->msc_reqbody_chunk_current->is_permanent = 1; *(const msc_data_chunk **)apr_array_push(msr->msc_reqbody_chunks) = msr->msc_reqbody_chunk_current; } if (bucket_left < (CHUNK_CAPACITY - msr->msc_reqbody_chunk_current->length)) { /* There's enough space in the current chunk. */ memcpy(msr->msc_reqbody_chunk_current->data + msr->msc_reqbody_chunk_current->length, data + bucket_offset, bucket_left); msr->msc_reqbody_chunk_current->length += bucket_left; bucket_left = 0; } else { /* Fill the existing chunk. */ unsigned long int copy_length = CHUNK_CAPACITY - msr->msc_reqbody_chunk_current->length; memcpy(msr->msc_reqbody_chunk_current->data + msr->msc_reqbody_chunk_current->length, data + bucket_offset, copy_length); bucket_offset += copy_length; bucket_left -= copy_length; msr->msc_reqbody_chunk_current->length += copy_length; /* We're done with this chunk. Setting the pointer * to NULL is going to force a new chunk to be allocated * on the next go. */ msr->msc_reqbody_chunk_current = NULL; } } msr->msc_reqbody_length += length; } return 1; }
Vulnerable
[ "CWE-476" ]
ModSecurity
0840b13612a0b7ef1ce7441cf811dcfc6b463fba
9.527462758098152e+37
123
Fixed: chuck null pointer when unknown CT is sent and over in-memory limit
1
clean_key (kbnode_t keyblock, int noisy, int self_only, int *uids_cleaned, int *sigs_cleaned) { kbnode_t uidnode; merge_keys_and_selfsig (keyblock); for (uidnode = keyblock->next; uidnode && uidnode->pkt->pkttype != PKT_PUBLIC_SUBKEY; uidnode = uidnode->next) { if (uidnode->pkt->pkttype == PKT_USER_ID) clean_one_uid (keyblock, uidnode,noisy, self_only, uids_cleaned, sigs_cleaned); } }
Safe
[ "CWE-20" ]
gnupg
2183683bd633818dd031b090b5530951de76f392
4.717742338905348e+37
16
Use inline functions to convert buffer data to scalars. * common/host2net.h (buf16_to_ulong, buf16_to_uint): New. (buf16_to_ushort, buf16_to_u16): New. (buf32_to_size_t, buf32_to_ulong, buf32_to_uint, buf32_to_u32): New. -- Commit 91b826a38880fd8a989318585eb502582636ddd8 was not enough to avoid all sign extension on shift problems. Hanno Böck found a case with an invalid read due to this problem. To fix that once and for all almost all uses of "<< 24" and "<< 8" are changed by this patch to use an inline function from host2net.h. Signed-off-by: Werner Koch <[email protected]>
0
GF_Err gf_isom_close(GF_ISOFile *movie) { GF_Err e=GF_OK; if (movie == NULL) return GF_ISOM_INVALID_FILE; e = gf_isom_write(movie); //free and return; gf_isom_delete_movie(movie); return e; }
Safe
[ "CWE-476" ]
gpac
ebfa346eff05049718f7b80041093b4c5581c24e
2.9033603765626016e+38
9
fixed #1706
0
static void unsetJoinExpr(Expr *p, int iTable){ while( p ){ if( ExprHasProperty(p, EP_FromJoin) && (iTable<0 || p->iRightJoinTable==iTable) ){ ExprClearProperty(p, EP_FromJoin); } if( p->op==TK_FUNCTION && p->x.pList ){ int i; for(i=0; i<p->x.pList->nExpr; i++){ unsetJoinExpr(p->x.pList->a[i].pExpr, iTable); } } unsetJoinExpr(p->pLeft, iTable); p = p->pRight; } }
Safe
[ "CWE-20" ]
sqlite
e59c562b3f6894f84c715772c4b116d7b5c01348
1.363614801455577e+38
16
Fix a crash that could occur if a sub-select that uses both DISTINCT and window functions also used an ORDER BY that is the same as its select list. FossilOrigin-Name: bcdd66c1691955c697f3d756c2b035acfe98f6aad72e90b0021bab6e9023b3ba
0
static int ExecuteHelp( SQLHDBC hDbc, char *szSQL, char cDelimiter, int bColumnNames, int bHTMLTable ) { char szTable[250] = ""; SQLHSTMT hStmt; SQLTCHAR szSepLine[32001]; SQLLEN nRows = 0; szSepLine[ 0 ] = 0; /**************************** * EXECUTE SQL ***************************/ if ( SQLAllocStmt( hDbc, &hStmt ) != SQL_SUCCESS ) { if ( bVerbose ) DumpODBCLog( hEnv, hDbc, 0 ); fprintf( stderr, "[ISQL]ERROR: Could not SQLAllocStmt\n" ); return 0; } if ( iniElement( szSQL, ' ', '\0', 1, szTable, sizeof(szTable) ) == INI_SUCCESS ) { SQLWCHAR tname[ 1024 ]; ansi_to_unicode( szTable, tname ); /* COLUMNS */ if ( SQLColumns( hStmt, NULL, 0, NULL, 0, tname, SQL_NTS, NULL, 0 ) != SQL_SUCCESS ) { if ( bVerbose ) DumpODBCLog( hEnv, hDbc, hStmt ); fprintf( stderr, "[ISQL]ERROR: Could not SQLColumns\n" ); SQLFreeStmt( hStmt, SQL_DROP ); return 0; } } else { /* TABLES */ if ( SQLTables( hStmt, NULL, 0, NULL, 0, NULL, 0, NULL, 0 ) != SQL_SUCCESS ) { if ( bVerbose ) DumpODBCLog( hEnv, hDbc, hStmt ); fprintf( stderr, "[ISQL]ERROR: Could not SQLTables\n" ); SQLFreeStmt( hStmt, SQL_DROP ); return 0; } } /**************************** * WRITE HEADER ***************************/ if ( bHTMLTable ) WriteHeaderHTMLTable( hStmt ); else if ( cDelimiter == 0 ) UWriteHeaderNormal( hStmt, szSepLine ); else if ( cDelimiter && bColumnNames ) WriteHeaderDelimited( hStmt, cDelimiter ); /**************************** * WRITE BODY ***************************/ if ( bHTMLTable ) WriteBodyHTMLTable( hStmt ); else if ( cDelimiter == 0 ) nRows = WriteBodyNormal( hStmt ); else WriteBodyDelimited( hStmt, cDelimiter ); /**************************** * WRITE FOOTER ***************************/ if ( bHTMLTable ) WriteFooterHTMLTable( hStmt ); else if ( cDelimiter == 0 ) UWriteFooterNormal( hStmt, szSepLine, nRows ); /**************************** * CLEANUP ***************************/ SQLFreeStmt( hStmt, SQL_DROP ); return 1; }
Safe
[ "CWE-119", "CWE-369" ]
unixODBC
45ef78e037f578b15fc58938a3a3251655e71d6f
1.2478185274560954e+38
80
New Pre Source
0
static int test_split(struct libmnt_test *ts, int argc, char *argv[]) { char *optstr, *user = NULL, *fs = NULL, *vfs = NULL; int rc; if (argc < 2) return -EINVAL; optstr = xstrdup(argv[1]); rc = mnt_split_optstr(optstr, &user, &vfs, &fs, 0, 0); if (!rc) { printf("user : %s\n", user); printf("vfs : %s\n", vfs); printf("fs : %s\n", fs); } free(user); free(vfs); free(fs); free(optstr); return rc; }
Safe
[ "CWE-552", "CWE-703" ]
util-linux
57202f5713afa2af20ffbb6ab5331481d0396f8d
2.03797798810618e+38
23
libmount: fix UID check for FUSE umount [CVE-2021-3995] Improper UID check allows an unprivileged user to unmount FUSE filesystems of users with similar UID. Signed-off-by: Karel Zak <[email protected]>
0
apply_background_to_window (GSManager *manager, GSWindow *window) { GdkPixmap *pixmap; GdkScreen *screen; int width; int height; if (manager->priv->bg == NULL) { gs_debug ("No background available"); gs_window_set_background_pixmap (window, NULL); } screen = gs_window_get_screen (window); width = gdk_screen_get_width (screen); height = gdk_screen_get_height (screen); gs_debug ("Creating pixmap background w:%d h:%d", width, height); pixmap = gnome_bg_create_pixmap (manager->priv->bg, gs_window_get_gdk_window (window), width, height, TRUE); gs_window_set_background_pixmap (window, pixmap); g_object_unref (pixmap); }
Safe
[]
gnome-screensaver
f6d3defdc7080a540d7f8df15dc309a9364ae668
1.5371575667751542e+37
25
Create or remove windows as number of monitors changes due to randr 1.2 2008-08-20 William Jon McCann <[email protected]> * src/gs-manager.c (gs_manager_create_window_for_monitor), (on_screen_monitors_changed), (gs_manager_destroy_windows), (gs_manager_finalize), (gs_manager_create_windows_for_screen): Create or remove windows as number of monitors changes due to randr 1.2 goodness. svn path=/trunk/; revision=1483
0
struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn) { return __gfn_to_memslot(kvm_vcpu_memslots(vcpu), gfn); }
Safe
[ "CWE-416" ]
linux
0774a964ef561b7170d8d1b1bfe6f88002b6d219
1.6145693627699358e+38
4
KVM: Fix out of range accesses to memslots Reset the LRU slot if it becomes invalid when deleting a memslot to fix an out-of-bounds/use-after-free access when searching through memslots. Explicitly check for there being no used slots in search_memslots(), and in the caller of s390's approximation variant. Fixes: 36947254e5f9 ("KVM: Dynamically size memslot array based on number of used slots") Reported-by: Qian Cai <[email protected]> Cc: Peter Xu <[email protected]> Signed-off-by: Sean Christopherson <[email protected]> Message-Id: <[email protected]> Acked-by: Christian Borntraeger <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
0
lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, union futex_key *key, struct futex_pi_state **ps) { struct futex_pi_state *pi_state = NULL; struct futex_q *this, *next; struct plist_head *head; struct task_struct *p; pid_t pid = uval & FUTEX_TID_MASK; head = &hb->chain; plist_for_each_entry_safe(this, next, head, list) { if (match_futex(&this->key, key)) { /* * Another waiter already exists - bump up * the refcount and return its pi_state: */ pi_state = this->pi_state; /* * Userspace might have messed up non PI and PI futexes */ if (unlikely(!pi_state)) return -EINVAL; WARN_ON(!atomic_read(&pi_state->refcount)); /* * When pi_state->owner is NULL then the owner died * and another waiter is on the fly. pi_state->owner * is fixed up by the task which acquires * pi_state->rt_mutex. * * We do not check for pid == 0 which can happen when * the owner died and robust_list_exit() cleared the * TID. */ if (pid && pi_state->owner) { /* * Bail out if user space manipulated the * futex value. */ if (pid != task_pid_vnr(pi_state->owner)) return -EINVAL; } atomic_inc(&pi_state->refcount); *ps = pi_state; return 0; } } /* * We are the first waiter - try to look up the real owner and attach * the new pi_state to it, but bail out when TID = 0 */ if (!pid) return -ESRCH; p = futex_find_get_task(pid); if (!p) return -ESRCH; /* * We need to look at the task state flags to figure out, * whether the task is exiting. To protect against the do_exit * change of the task flags, we do this protected by * p->pi_lock: */ raw_spin_lock_irq(&p->pi_lock); if (unlikely(p->flags & PF_EXITING)) { /* * The task is on the way out. When PF_EXITPIDONE is * set, we know that the task has finished the * cleanup: */ int ret = (p->flags & PF_EXITPIDONE) ? -ESRCH : -EAGAIN; raw_spin_unlock_irq(&p->pi_lock); put_task_struct(p); return ret; } pi_state = alloc_pi_state(); /* * Initialize the pi_mutex in locked state and make 'p' * the owner of it: */ rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p); /* Store the key for possible exit cleanups: */ pi_state->key = *key; WARN_ON(!list_empty(&pi_state->list)); list_add(&pi_state->list, &p->pi_state_list); pi_state->owner = p; raw_spin_unlock_irq(&p->pi_lock); put_task_struct(p); *ps = pi_state; return 0; }
Safe
[ "CWE-119", "CWE-787" ]
linux
7ada876a8703f23befbb20a7465a702ee39b1704
9.409082790865008e+37
104
futex: Fix errors in nested key ref-counting futex_wait() is leaking key references due to futex_wait_setup() acquiring an additional reference via the queue_lock() routine. The nested key ref-counting has been masking bugs and complicating code analysis. queue_lock() is only called with a previously ref-counted key, so remove the additional ref-counting from the queue_(un)lock() functions. Also futex_wait_requeue_pi() drops one key reference too many in unqueue_me_pi(). Remove the key reference handling from unqueue_me_pi(). This was paired with a queue_lock() in futex_lock_pi(), so the count remains unchanged. Document remaining nested key ref-counting sites. Signed-off-by: Darren Hart <[email protected]> Reported-and-tested-by: Matthieu Fertré<[email protected]> Reported-by: Louis Rilling<[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: John Kacur <[email protected]> Cc: Rusty Russell <[email protected]> LKML-Reference: <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected]
0
Error Box_colr::write(StreamWriter& writer) const { size_t box_start = reserve_box_header_space(writer); assert(m_color_profile); writer.write32(m_color_profile->get_type()); Error err = m_color_profile->write(writer); if (err) { return err; } prepend_header(writer, box_start); return Error::Ok; }
Safe
[ "CWE-703" ]
libheif
2710c930918609caaf0a664e9c7bc3dce05d5b58
2.008071900885226e+38
17
force fraction to a limited resolution to finally solve those pesky numerical edge cases
0
_outWindowDef(StringInfo str, const WindowDef *node) { WRITE_NODE_TYPE("WINDOWDEF"); WRITE_STRING_FIELD(name); WRITE_STRING_FIELD(refname); WRITE_NODE_FIELD(partitionClause); WRITE_NODE_FIELD(orderClause); WRITE_INT_FIELD(frameOptions); WRITE_NODE_FIELD(startOffset); WRITE_NODE_FIELD(endOffset); WRITE_LOCATION_FIELD(location); }
Safe
[ "CWE-362" ]
postgres
5f173040e324f6c2eebb90d86cf1b0cdb5890f0a
3.614718409777e+37
13
Avoid repeated name lookups during table and index DDL. If the name lookups come to different conclusions due to concurrent activity, we might perform some parts of the DDL on a different table than other parts. At least in the case of CREATE INDEX, this can be used to cause the permissions checks to be performed against a different table than the index creation, allowing for a privilege escalation attack. This changes the calling convention for DefineIndex, CreateTrigger, transformIndexStmt, transformAlterTableStmt, CheckIndexCompatible (in 9.2 and newer), and AlterTable (in 9.1 and older). In addition, CheckRelationOwnership is removed in 9.2 and newer and the calling convention is changed in older branches. A field has also been added to the Constraint node (FkConstraint in 8.4). Third-party code calling these functions or using the Constraint node will require updating. Report by Andres Freund. Patch by Robert Haas and Andres Freund, reviewed by Tom Lane. Security: CVE-2014-0062
0
int xt_register_target(struct xt_target *target) { u_int8_t af = target->family; mutex_lock(&xt[af].mutex); list_add(&target->list, &xt[af].target); mutex_unlock(&xt[af].mutex); return 0; }
Safe
[ "CWE-119" ]
nf-next
d7591f0c41ce3e67600a982bab6989ef0f07b3ce
1.5292579459253856e+38
9
netfilter: x_tables: introduce and use xt_copy_counters_from_user The three variants use same copy&pasted code, condense this into a helper and use that. Make sure info.name is 0-terminated. Signed-off-by: Florian Westphal <[email protected]> Signed-off-by: Pablo Neira Ayuso <[email protected]>
0
SpoolssEndPagePrinter_q(tvbuff_t *tvb, int offset, packet_info *pinfo, proto_tree *tree, dcerpc_info *di, guint8 *drep) { e_ctx_hnd policy_hnd; char *pol_name; /* Parse packet */ offset = dissect_nt_policy_hnd( tvb, offset, pinfo, tree, di, drep, hf_hnd, &policy_hnd, NULL, FALSE, FALSE); dcerpc_fetch_polhnd_data(&policy_hnd, &pol_name, NULL, NULL, NULL, pinfo->num); if (pol_name) col_append_fstr(pinfo->cinfo, COL_INFO, ", %s", pol_name); return offset; }
Safe
[ "CWE-399" ]
wireshark
b4d16b4495b732888e12baf5b8a7e9bf2665e22b
2.53963123161819e+38
22
SPOOLSS: Try to avoid an infinite loop. Use tvb_reported_length_remaining in dissect_spoolss_uint16uni. Make sure our offset always increments in dissect_spoolss_keybuffer. Change-Id: I7017c9685bb2fa27161d80a03b8fca4ef630e793 Reviewed-on: https://code.wireshark.org/review/14687 Reviewed-by: Gerald Combs <[email protected]> Petri-Dish: Gerald Combs <[email protected]> Tested-by: Petri Dish Buildbot <[email protected]> Reviewed-by: Michael Mann <[email protected]>
0
ssize_t qemu_receive_packet(NetClientState *nc, const uint8_t *buf, int size) { if (!qemu_can_receive_packet(nc)) { return 0; } return qemu_net_queue_receive(nc->incoming_queue, buf, size); }
Safe
[ "CWE-835" ]
qemu
705df5466c98f3efdd2b68d3b31dad86858acad7
1.6016109494105024e+38
8
net: introduce qemu_receive_packet() Some NIC supports loopback mode and this is done by calling nc->info->receive() directly which in fact suppresses the effort of reentrancy check that is done in qemu_net_queue_send(). Unfortunately we can't use qemu_net_queue_send() here since for loopback there's no sender as peer, so this patch introduce a qemu_receive_packet() which is used for implementing loopback mode for a NIC with this check. NIC that supports loopback mode will be converted to this helper. This is intended to address CVE-2021-3416. Cc: Prasad J Pandit <[email protected]> Reviewed-by: Philippe Mathieu-Daudé <[email protected]> Cc: [email protected] Signed-off-by: Jason Wang <[email protected]>
0
static char **alloc_pg_vec(struct tpacket_req *req, int order) { unsigned int block_nr = req->tp_block_nr; char **pg_vec; int i; pg_vec = kzalloc(block_nr * sizeof(char *), GFP_KERNEL); if (unlikely(!pg_vec)) goto out; for (i = 0; i < block_nr; i++) { pg_vec[i] = alloc_one_pg_vec_page(order); if (unlikely(!pg_vec[i])) goto out_free_pgvec; } out: return pg_vec; out_free_pgvec: free_pg_vec(pg_vec, order, block_nr); pg_vec = NULL; goto out; }
Safe
[ "CWE-909" ]
linux-2.6
67286640f638f5ad41a946b9a3dc75327950248f
2.867818188046524e+38
24
net: packet: fix information leak to userland packet_getname_spkt() doesn't initialize all members of sa_data field of sockaddr struct if strlen(dev->name) < 13. This structure is then copied to userland. It leads to leaking of contents of kernel stack memory. We have to fully fill sa_data with strncpy() instead of strlcpy(). The same with packet_getname(): it doesn't initialize sll_pkttype field of sockaddr_ll. Set it to zero. Signed-off-by: Vasiliy Kulikov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
0
explicit EncodePngOp(OpKernelConstruction* context) : OpKernel(context) { OP_REQUIRES_OK(context, context->GetAttr("compression", &compression_)); OP_REQUIRES(context, -1 <= compression_ && compression_ <= 9, errors::InvalidArgument("compression should be in [-1,9], got ", compression_)); DataType dt = context->input_type(0); OP_REQUIRES(context, dt == DataType::DT_UINT8 || dt == DataType::DT_UINT16, errors::InvalidArgument( "image must have type uint8 or uint16, got ", dt)); if (dt == DataType::DT_UINT8) { desired_channel_bits_ = 8; } else { desired_channel_bits_ = 16; } }
Safe
[ "CWE-754", "CWE-787" ]
tensorflow
26eb323554ffccd173e8a79a8c05c15b685ae4d1
1.9505271423752984e+38
17
Fix null CHECK issue with `tf.raw_ops.EncodePng`. PiperOrigin-RevId: 369717714 Change-Id: I24136cd99c20b8466671f4f93b670ef6f6dd1250
0
void throwPreloaderSpawnException(const string &msg, SpawnException::ErrorKind errorKind, BackgroundIOCapturerPtr &stderrCapturer, const DebugDirPtr &debugDir) { TRACE_POINT(); // Stop the stderr capturing thread and get the captured stderr // output so far. string stderrOutput; if (stderrCapturer != NULL) { stderrOutput = stderrCapturer->stop(); } // If the exception wasn't due to a timeout, try to capture the // remaining stderr output for at most 2 seconds. if (errorKind != SpawnException::PRELOADER_STARTUP_TIMEOUT && errorKind != SpawnException::APP_STARTUP_TIMEOUT && stderrCapturer != NULL) { bool done = false; unsigned long long timeout = 2000; while (!done) { char buf[1024 * 32]; unsigned int ret; try { ret = readExact(stderrCapturer->getFd(), buf, sizeof(buf), &timeout); if (ret == 0) { done = true; } else { stderrOutput.append(buf, ret); } } catch (const SystemException &e) { P_WARN("Stderr I/O capture error: " << e.what()); done = true; } catch (const TimeoutException &) { done = true; } } } stderrCapturer.reset(); // Now throw SpawnException with the captured stderr output // as error response. SpawnException e(msg, stderrOutput, false, errorKind); e.setPreloaderCommand(getPreloaderCommandString()); annotatePreloaderException(e, debugDir); throw e; }
Safe
[]
passenger
8c6693e0818772c345c979840d28312c2edd4ba4
1.9558618198563174e+38
49
Security check socket filenames reported by spawned application processes.
0
GF_Err gf_isom_get_fragment_defaults(GF_ISOFile *the_file, u32 trackNumber, u32 *defaultDuration, u32 *defaultSize, u32 *defaultDescriptionIndex, u32 *defaultRandomAccess, u8 *defaultPadding, u16 *defaultDegradationPriority) { GF_TrackBox *trak; GF_StscEntry *sc_ent; u32 i, j, maxValue, value; #ifndef GPAC_DISABLE_ISOM_FRAGMENTS GF_TrackExtendsBox *trex; #endif GF_SampleTableBox *stbl; trak = gf_isom_get_track_from_file(the_file, trackNumber); if (!trak) return GF_BAD_PARAM; /*if trex is already set, restore flags*/ #ifndef GPAC_DISABLE_ISOM_FRAGMENTS trex = the_file->moov->mvex ? GetTrex(the_file->moov, gf_isom_get_track_id(the_file,trackNumber) ) : NULL; if (trex) { trex->track = trak; if (defaultDuration) *defaultDuration = trex->def_sample_duration; if (defaultSize) *defaultSize = trex->def_sample_size; if (defaultDescriptionIndex) *defaultDescriptionIndex = trex->def_sample_desc_index; if (defaultRandomAccess) *defaultRandomAccess = GF_ISOM_GET_FRAG_SYNC(trex->def_sample_flags); if (defaultPadding) *defaultPadding = GF_ISOM_GET_FRAG_PAD(trex->def_sample_flags); if (defaultDegradationPriority) *defaultDegradationPriority = GF_ISOM_GET_FRAG_DEG(trex->def_sample_flags); return GF_OK; } #endif stbl = trak->Media->information->sampleTable; if (!stbl->TimeToSample || !stbl->SampleSize || !stbl->SampleToChunk) return GF_ISOM_INVALID_FILE; //duration if (defaultDuration) { maxValue = value = 0; for (i=0; i<stbl->TimeToSample->nb_entries; i++) { if (stbl->TimeToSample->entries[i].sampleCount>maxValue) { value = stbl->TimeToSample->entries[i].sampleDelta; maxValue = stbl->TimeToSample->entries[i].sampleCount; } } *defaultDuration = value; } //size if (defaultSize) { *defaultSize = stbl->SampleSize->sampleSize; } //descIndex if (defaultDescriptionIndex) { GF_SampleToChunkBox *stsc= stbl->SampleToChunk; maxValue = value = 0; for (i=0; i<stsc->nb_entries; i++) { sc_ent = &stsc->entries[i]; if ((sc_ent->nextChunk - sc_ent->firstChunk) * sc_ent->samplesPerChunk > maxValue) { value = sc_ent->sampleDescriptionIndex; maxValue = (sc_ent->nextChunk - sc_ent->firstChunk) * sc_ent->samplesPerChunk; } } *defaultDescriptionIndex = value ? value : 1; } //RAP if (defaultRandomAccess) { //no sync table is ALL RAP *defaultRandomAccess = stbl->SyncSample ? 0 : 1; if (stbl->SyncSample && (stbl->SyncSample->nb_entries == stbl->SampleSize->sampleCount)) { *defaultRandomAccess = 1; } } //defaultPadding if (defaultPadding) { *defaultPadding = 0; if (stbl->PaddingBits) { maxValue = 0; for (i=0; i<stbl->PaddingBits->SampleCount; i++) { value = 0; for (j=0; j<stbl->PaddingBits->SampleCount; j++) { if (stbl->PaddingBits->padbits[i]==stbl->PaddingBits->padbits[j]) { value ++; } } if (value>maxValue) { maxValue = value; *defaultPadding = stbl->PaddingBits->padbits[i]; } } } } //defaultDegradationPriority if (defaultDegradationPriority) { *defaultDegradationPriority = 0; if (stbl->DegradationPriority) { maxValue = 0; for (i=0; i<stbl->DegradationPriority->nb_entries; i++) { value = 0; for (j=0; j<stbl->DegradationPriority->nb_entries; j++) { if (stbl->DegradationPriority->priorities[i]==stbl->DegradationPriority->priorities[j]) { value ++; } } if (value>maxValue) { maxValue = value; *defaultDegradationPriority = stbl->DegradationPriority->priorities[i]; } } } } return GF_OK; }
Safe
[ "CWE-476" ]
gpac
ebfa346eff05049718f7b80041093b4c5581c24e
1.0201839648047581e+37
111
fixed #1706
0
HandleVncAuth(rfbClient *client) { uint8_t challenge[CHALLENGESIZE]; char *passwd=NULL; int i; if (!ReadFromRFBServer(client, (char *)challenge, CHALLENGESIZE)) return FALSE; if (client->serverPort!=-1) { /* if not playing a vncrec file */ if (client->GetPassword) passwd = client->GetPassword(client); if ((!passwd) || (strlen(passwd) == 0)) { rfbClientLog("Reading password failed\n"); return FALSE; } if (strlen(passwd) > 8) { passwd[8] = '\0'; } rfbClientEncryptBytes(challenge, passwd); /* Lose the password from memory */ for (i = strlen(passwd); i >= 0; i--) { passwd[i] = '\0'; } free(passwd); if (!WriteToRFBServer(client, (char *)challenge, CHALLENGESIZE)) return FALSE; } /* Handle the SecurityResult message */ if (!rfbHandleAuthResult(client)) return FALSE; return TRUE; }
Safe
[ "CWE-20" ]
libvncserver
85a778c0e45e87e35ee7199f1f25020648e8b812
1.6494350118245202e+38
36
Check for MallocFrameBuffer() return value If MallocFrameBuffer() returns FALSE, frame buffer pointer is left to NULL. Subsequent writes into that buffer could lead to memory corruption, or even arbitrary code execution.
0