Bugzilla – Attachment 49008 Details for Bug 115200: IPsec crashes kernel
[patch] patch-2.6.13-rc2-git3-21

Description: patch-2.6.13-rc2-git3-21
Filename:    patch-2.6.13-rc2-git3-21
MIME Type:   text/plain
Creator:     Olaf Hering
Created:     2005-09-06 23:58:35 UTC
Size:        15.59 KB
Flags:       patch, obsolete
From: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
To: bk-commits-head@vger.kernel.org
Subject: [CRYPTO] Add plumbing for multi-block operations
Date: Thu, 07 Jul 2005 02:11:49 +0000
Message-Id: <200507070203.j6723MYJ027148@hera.kernel.org>
X-Git-Commit: c774e93e2152d0be2612739418689e6e6400f4eb
X-Git-Parent: 8279dd748f9704b811e528b31304e2fab026abc5

tree abe25ec0577bd95128adb3f38609a09f0a3e2469
parent 8279dd748f9704b811e528b31304e2fab026abc5
author Herbert Xu <herbert@gondor.apana.org.au> Thu, 07 Jul 2005 03:51:31 -0700
committer David S. Miller <davem@davemloft.net> Thu, 07 Jul 2005 03:51:31 -0700

[CRYPTO] Add plumbing for multi-block operations

The VIA Padlock device is able to perform much better when multiple
blocks are fed to it at once.  As this device offers an exceptional
throughput rate it is worthwhile to optimise the infrastructure
specifically for it.

We shift the existing page-sized fast path down to the CBC/ECB functions.
We can then replace the CBC/ECB functions with functions provided by the
underlying algorithm that performs the multi-block operations.

As a side-effect this improves the performance of large cipher operations
for all existing algorithm implementations.  I've measured the gain to be
around 5% for 3DES and 15% for AES.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>

 crypto/cipher.c      | 250 ++++++++++++++++++++++++++++++++-------------------
 crypto/scatterwalk.c |   4
 crypto/scatterwalk.h |   6 -
 3 files changed, 163 insertions(+), 97 deletions(-)

diff --git a/crypto/cipher.c b/crypto/cipher.c
--- a/crypto/cipher.c
+++ b/crypto/cipher.c
@@ -4,6 +4,7 @@
  * Cipher operations.
  *
  * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ * Copyright (c) 2005 Herbert Xu <herbert@gondor.apana.org.au>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License as published by the Free
@@ -22,9 +23,13 @@
 #include "internal.h"
 #include "scatterwalk.h"
 
-typedef void (cryptfn_t)(void *, u8 *, const u8 *);
-typedef void (procfn_t)(struct crypto_tfm *, u8 *,
-                        u8*, cryptfn_t, void *);
+struct cipher_desc {
+	struct crypto_tfm *tfm;
+	void (*crfn)(void *ctx, u8 *dst, const u8 *src);
+	unsigned int (*prfn)(const struct cipher_desc *desc, u8 *dst,
+			     const u8 *src, unsigned int nbytes);
+	void *info;
+};
 
 static inline void xor_64(u8 *a, const u8 *b)
 {
@@ -39,63 +44,57 @@ static inline void xor_128(u8 *a, const
 	((u32 *)a)[2] ^= ((u32 *)b)[2];
 	((u32 *)a)[3] ^= ((u32 *)b)[3];
 }
-
-static inline void *prepare_src(struct scatter_walk *walk, int bsize,
-				void *tmp, int in_place)
-{
-	void *src = walk->data;
-	int n = bsize;
-
-	if (unlikely(scatterwalk_across_pages(walk, bsize))) {
-		src = tmp;
-		n = scatterwalk_copychunks(src, walk, bsize, 0);
-	}
-	scatterwalk_advance(walk, n);
-	return src;
-}
 
-static inline void *prepare_dst(struct scatter_walk *walk, int bsize,
-				void *tmp, int in_place)
+static unsigned int crypt_slow(const struct cipher_desc *desc,
+			       struct scatter_walk *in,
+			       struct scatter_walk *out, unsigned int bsize)
 {
-	void *dst = walk->data;
+	u8 src[bsize];
+	u8 dst[bsize];
+	unsigned int n;
 
-	if (unlikely(scatterwalk_across_pages(walk, bsize)) ||
-	    in_place)
-		dst = tmp;
-	return dst;
-}
+	n = scatterwalk_copychunks(src, in, bsize, 0);
+	scatterwalk_advance(in, n);
 
-static inline void complete_src(struct scatter_walk *walk, int bsize,
-				void *src, int in_place)
-{
+	desc->prfn(desc, dst, src, bsize);
+
+	n = scatterwalk_copychunks(dst, out, bsize, 1);
+	scatterwalk_advance(out, n);
+
+	return bsize;
 }
 
-static inline void complete_dst(struct scatter_walk *walk, int bsize,
-				void *dst, int in_place)
+static inline unsigned int crypt_fast(const struct cipher_desc *desc,
+				      struct scatter_walk *in,
+				      struct scatter_walk *out,
+				      unsigned int nbytes)
 {
-	int n = bsize;
+	u8 *src, *dst;
+
+	src = in->data;
+	dst = scatterwalk_samebuf(in, out) ? src : out->data;
+
+	nbytes = desc->prfn(desc, dst, src, nbytes);
+
+	scatterwalk_advance(in, nbytes);
+	scatterwalk_advance(out, nbytes);
 
-	if (unlikely(scatterwalk_across_pages(walk, bsize)))
-		n = scatterwalk_copychunks(dst, walk, bsize, 1);
-	else if (in_place)
-		memcpy(walk->data, dst, bsize);
-	scatterwalk_advance(walk, n);
+	return nbytes;
 }
 
 /*
  * Generic encrypt/decrypt wrapper for ciphers, handles operations across
  * multiple page boundaries by using temporary blocks.  In user context,
- * the kernel is given a chance to schedule us once per block.
+ * the kernel is given a chance to schedule us once per page.
  */
-static int crypt(struct crypto_tfm *tfm,
+static int crypt(const struct cipher_desc *desc,
 		 struct scatterlist *dst,
 		 struct scatterlist *src,
-		 unsigned int nbytes, cryptfn_t crfn,
-		 procfn_t prfn, void *info)
+		 unsigned int nbytes)
 {
 	struct scatter_walk walk_in, walk_out;
+	struct crypto_tfm *tfm = desc->tfm;
 	const unsigned int bsize = crypto_tfm_alg_blocksize(tfm);
-	u8 tmp_src[bsize];
-	u8 tmp_dst[bsize];
 
 	if (!nbytes)
 		return 0;
@@ -109,29 +108,20 @@ static int crypt(struct crypto_tfm *tfm,
 	scatterwalk_start(&walk_out, dst);
 
 	for(;;) {
-		u8 *src_p, *dst_p;
-		int in_place;
+		unsigned int n;
 
 		scatterwalk_map(&walk_in, 0);
 		scatterwalk_map(&walk_out, 1);
 
-		in_place = scatterwalk_samebuf(&walk_in, &walk_out);
+		n = scatterwalk_clamp(&walk_in, nbytes);
+		n = scatterwalk_clamp(&walk_out, n);
 
-		do {
-			src_p = prepare_src(&walk_in, bsize, tmp_src,
-					    in_place);
-			dst_p = prepare_dst(&walk_out, bsize, tmp_dst,
-					    in_place);
-
-			prfn(tfm, dst_p, src_p, crfn, info);
-
-			complete_src(&walk_in, bsize, src_p, in_place);
-			complete_dst(&walk_out, bsize, dst_p, in_place);
-
-			nbytes -= bsize;
-		} while (nbytes &&
-			 !scatterwalk_across_pages(&walk_in, bsize) &&
-			 !scatterwalk_across_pages(&walk_out, bsize));
+		if (likely(n >= bsize))
+			n = crypt_fast(desc, &walk_in, &walk_out, n);
+		else
+			n = crypt_slow(desc, &walk_in, &walk_out, bsize);
+
+		nbytes -= n;
 
 		scatterwalk_done(&walk_in, 0, nbytes);
 		scatterwalk_done(&walk_out, 1, nbytes);
@@ -143,30 +133,78 @@ static int crypt(struct crypto_tfm *tfm,
 	}
 }
 
-static void cbc_process_encrypt(struct crypto_tfm *tfm, u8 *dst, u8 *src,
-				cryptfn_t fn, void *info)
+static unsigned int cbc_process_encrypt(const struct cipher_desc *desc,
+					u8 *dst, const u8 *src,
+					unsigned int nbytes)
 {
-	u8 *iv = info;
+	struct crypto_tfm *tfm = desc->tfm;
+	void (*xor)(u8 *, const u8 *) = tfm->crt_u.cipher.cit_xor_block;
+	int bsize = crypto_tfm_alg_blocksize(tfm);
+
+	void (*fn)(void *, u8 *, const u8 *) = desc->crfn;
+	u8 *iv = desc->info;
+	unsigned int done = 0;
+
+	do {
+		xor(iv, src);
+		fn(crypto_tfm_ctx(tfm), dst, iv);
+		memcpy(iv, dst, bsize);
+
+		src += bsize;
+		dst += bsize;
+	} while ((done += bsize) < nbytes);
 
-	tfm->crt_u.cipher.cit_xor_block(iv, src);
-	fn(crypto_tfm_ctx(tfm), dst, iv);
-	memcpy(iv, dst, crypto_tfm_alg_blocksize(tfm));
+	return done;
 }
 
-static void cbc_process_decrypt(struct crypto_tfm *tfm, u8 *dst, u8 *src,
-				cryptfn_t fn, void *info)
+static unsigned int cbc_process_decrypt(const struct cipher_desc *desc,
+					u8 *dst, const u8 *src,
+					unsigned int nbytes)
 {
-	u8 *iv = info;
+	struct crypto_tfm *tfm = desc->tfm;
+	void (*xor)(u8 *, const u8 *) = tfm->crt_u.cipher.cit_xor_block;
+	int bsize = crypto_tfm_alg_blocksize(tfm);
+
+	u8 stack[src == dst ? bsize : 0];
+	u8 *buf = stack;
+	u8 **dst_p = src == dst ? &buf : &dst;
+
+	void (*fn)(void *, u8 *, const u8 *) = desc->crfn;
+	u8 *iv = desc->info;
+	unsigned int done = 0;
+
+	do {
+		u8 *tmp_dst = *dst_p;
+
+		fn(crypto_tfm_ctx(tfm), tmp_dst, src);
+		xor(tmp_dst, iv);
+		memcpy(iv, src, bsize);
+		if (tmp_dst != dst)
+			memcpy(dst, tmp_dst, bsize);
 
-	fn(crypto_tfm_ctx(tfm), dst, src);
-	tfm->crt_u.cipher.cit_xor_block(dst, iv);
-	memcpy(iv, src, crypto_tfm_alg_blocksize(tfm));
+		src += bsize;
+		dst += bsize;
+	} while ((done += bsize) < nbytes);
+
+	return done;
 }
 
-static void ecb_process(struct crypto_tfm *tfm, u8 *dst, u8 *src,
-			cryptfn_t fn, void *info)
+static unsigned int ecb_process(const struct cipher_desc *desc, u8 *dst,
+				const u8 *src, unsigned int nbytes)
 {
-	fn(crypto_tfm_ctx(tfm), dst, src);
+	struct crypto_tfm *tfm = desc->tfm;
+	int bsize = crypto_tfm_alg_blocksize(tfm);
+	void (*fn)(void *, u8 *, const u8 *) = desc->crfn;
+	unsigned int done = 0;
+
+	do {
+		fn(crypto_tfm_ctx(tfm), dst, src);
+
+		src += bsize;
+		dst += bsize;
+	} while ((done += bsize) < nbytes);
+
+	return done;
 }
 
 static int setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen)
@@ -185,9 +223,13 @@ static int ecb_encrypt(struct crypto_tfm
 		       struct scatterlist *dst,
 		       struct scatterlist *src, unsigned int nbytes)
 {
-	return crypt(tfm, dst, src, nbytes,
-		     tfm->__crt_alg->cra_cipher.cia_encrypt,
-		     ecb_process, NULL);
+	struct cipher_desc desc;
+
+	desc.tfm = tfm;
+	desc.crfn = tfm->__crt_alg->cra_cipher.cia_encrypt;
+	desc.prfn = ecb_process;
+
+	return crypt(&desc, dst, src, nbytes);
 }
 
 static int ecb_decrypt(struct crypto_tfm *tfm,
@@ -195,9 +237,13 @@ static int ecb_decrypt(struct crypto_tfm
		       struct scatterlist *src,
		       unsigned int nbytes)
 {
-	return crypt(tfm, dst, src, nbytes,
-		     tfm->__crt_alg->cra_cipher.cia_decrypt,
-		     ecb_process, NULL);
+	struct cipher_desc desc;
+
+	desc.tfm = tfm;
+	desc.crfn = tfm->__crt_alg->cra_cipher.cia_decrypt;
+	desc.prfn = ecb_process;
+
+	return crypt(&desc, dst, src, nbytes);
 }
 
 static int cbc_encrypt(struct crypto_tfm *tfm,
@@ -205,9 +251,14 @@ static int cbc_encrypt(struct crypto_tfm
		       struct scatterlist *src,
		       unsigned int nbytes)
 {
-	return crypt(tfm, dst, src, nbytes,
-		     tfm->__crt_alg->cra_cipher.cia_encrypt,
-		     cbc_process_encrypt, tfm->crt_cipher.cit_iv);
+	struct cipher_desc desc;
+
+	desc.tfm = tfm;
+	desc.crfn = tfm->__crt_alg->cra_cipher.cia_encrypt;
+	desc.prfn = cbc_process_encrypt;
+	desc.info = tfm->crt_cipher.cit_iv;
+
+	return crypt(&desc, dst, src, nbytes);
 }
 
 static int cbc_encrypt_iv(struct crypto_tfm *tfm,
@@ -215,9 +266,14 @@ static int cbc_encrypt_iv(struct crypto_
			  struct scatterlist *src,
			  unsigned int nbytes, u8 *iv)
 {
-	return crypt(tfm, dst, src, nbytes,
-		     tfm->__crt_alg->cra_cipher.cia_encrypt,
-		     cbc_process_encrypt, iv);
+	struct cipher_desc desc;
+
+	desc.tfm = tfm;
+	desc.crfn = tfm->__crt_alg->cra_cipher.cia_encrypt;
+	desc.prfn = cbc_process_encrypt;
+	desc.info = iv;
+
+	return crypt(&desc, dst, src, nbytes);
 }
 
 static int cbc_decrypt(struct crypto_tfm *tfm,
@@ -225,9 +281,14 @@ static int cbc_decrypt(struct crypto_tfm
		       struct scatterlist *src,
		       unsigned int nbytes)
 {
-	return crypt(tfm, dst, src, nbytes,
-		     tfm->__crt_alg->cra_cipher.cia_decrypt,
-		     cbc_process_decrypt, tfm->crt_cipher.cit_iv);
+	struct cipher_desc desc;
+
+	desc.tfm = tfm;
+	desc.crfn = tfm->__crt_alg->cra_cipher.cia_decrypt;
+	desc.prfn = cbc_process_decrypt;
+	desc.info = tfm->crt_cipher.cit_iv;
+
+	return crypt(&desc, dst, src, nbytes);
 }
 
 static int cbc_decrypt_iv(struct crypto_tfm *tfm,
@@ -235,9 +296,14 @@ static int cbc_decrypt_iv(struct crypto_
			  struct scatterlist *src,
			  unsigned int nbytes, u8 *iv)
 {
-	return crypt(tfm, dst, src, nbytes,
-		     tfm->__crt_alg->cra_cipher.cia_decrypt,
-		     cbc_process_decrypt, iv);
+	struct cipher_desc desc;
+
+	desc.tfm = tfm;
+	desc.crfn = tfm->__crt_alg->cra_cipher.cia_decrypt;
+	desc.prfn = cbc_process_decrypt;
+	desc.info = iv;
+
+	return crypt(&desc, dst, src, nbytes);
 }
 
 static int nocrypt(struct crypto_tfm,
diff --git a/crypto/scatterwalk.c b/crypto/scatterwalk.c
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -100,7 +100,7 @@ void scatterwalk_done(struct scatter_wal
 int scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
			   size_t nbytes, int out)
 {
-	do {
+	while (nbytes > walk->len_this_page) {
 		memcpy_dir(buf, walk->data, walk->len_this_page, out);
 		buf += walk->len_this_page;
 		nbytes -= walk->len_this_page;
@@ -108,7 +108,7 @@ int scatterwalk_copychunks(void *buf, st
 		scatterwalk_unmap(walk, out);
 		scatterwalk_pagedone(walk, out, 1);
 		scatterwalk_map(walk, out);
-	} while (nbytes > walk->len_this_page);
+	}
 
 	memcpy_dir(buf, walk->data, nbytes, out);
 	return nbytes;
diff --git a/crypto/scatterwalk.h b/crypto/scatterwalk.h
--- a/crypto/scatterwalk.h
+++ b/crypto/scatterwalk.h
@@ -40,10 +40,10 @@ static inline int scatterwalk_samebuf(st
	       walk_in->offset == walk_out->offset;
 }
 
-static inline int scatterwalk_across_pages(struct scatter_walk *walk,
-					   unsigned int nbytes)
+static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
+					     unsigned int nbytes)
 {
-	return nbytes > walk->len_this_page;
+	return nbytes > walk->len_this_page ? walk->len_this_page : nbytes;
 }
 
 static inline void scatterwalk_advance(struct scatter_walk *walk,
-
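For orientation, the processing pattern this patch introduces can be sketched outside the kernel. The sketch below is a toy model, not the kernel code: a single-byte XOR stands in for the block cipher, a flat buffer replaces the scatterlist walk, and every name (`toy_*`, `BSIZE`) is invented for illustration. It shows the two ideas of the patch: a clamp that bounds each step to the contiguous span (as `scatterwalk_clamp()` does), and CBC routines that consume many whole blocks per call and return the number of bytes processed, in the shape of `cbc_process_encrypt()`/`cbc_process_decrypt()`.

```c
#include <assert.h>
#include <string.h>

#define BSIZE 8			/* toy block size */

/* Mirror of scatterwalk_clamp(): bound a request to the contiguous span. */
static unsigned int toy_clamp(unsigned int len_this_page, unsigned int nbytes)
{
	return nbytes > len_this_page ? len_this_page : nbytes;
}

/* Stand-in for cia_encrypt/cia_decrypt: XOR the block with the key.
 * XOR is its own inverse, so one routine serves both directions. */
static void toy_crfn(void *ctx, unsigned char *dst, const unsigned char *src)
{
	const unsigned char *key = ctx;
	int i;

	for (i = 0; i < BSIZE; i++)
		dst[i] = src[i] ^ key[i];
}

/* Multi-block CBC encrypt: handle every whole block in nbytes and
 * return the number of bytes consumed, like the patched prfn. */
static unsigned int toy_cbc_encrypt(void *ctx, unsigned char *iv,
				    unsigned char *dst,
				    const unsigned char *src,
				    unsigned int nbytes)
{
	unsigned int done = 0;
	int i;

	do {
		for (i = 0; i < BSIZE; i++)
			iv[i] ^= src[i];	/* xor(iv, src)         */
		toy_crfn(ctx, dst, iv);		/* fn(ctx, dst, iv)     */
		memcpy(iv, dst, BSIZE);		/* chain the ciphertext */
		src += BSIZE;
		dst += BSIZE;
	} while ((done += BSIZE) < nbytes);

	return done;
}

/* Multi-block CBC decrypt (out-of-place only; the real patch also
 * handles src == dst through a stack buffer). */
static unsigned int toy_cbc_decrypt(void *ctx, unsigned char *iv,
				    unsigned char *dst,
				    const unsigned char *src,
				    unsigned int nbytes)
{
	unsigned int done = 0;
	int i;

	do {
		toy_crfn(ctx, dst, src);	/* fn(ctx, dst, src)    */
		for (i = 0; i < BSIZE; i++)
			dst[i] ^= iv[i];	/* xor(dst, iv)         */
		memcpy(iv, src, BSIZE);		/* chain the ciphertext */
		src += BSIZE;
		dst += BSIZE;
	} while ((done += BSIZE) < nbytes);

	return done;
}
```

A caller plays the role of `crypt()`: clamp the walk, hand all the whole blocks that fit to the processing function in one call, and subtract what was done. This is the point of the change: the per-block loop moves out of the generic walker and into the mode routine, so a driver like Padlock can replace that routine with one hardware operation covering many blocks.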